Test Report: KVM_Linux_crio 18706

c94ef6ff19ad65e169e276817a1b4f9eee2ec8a0:2024-04-22:34155

Failed tests (31/311)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 152.56
32 TestAddons/parallel/MetricsServer 297.39
44 TestAddons/StoppedEnableDisable 154.38
87 TestFunctional/parallel/DashboardCmd 302.32
163 TestMultiControlPlane/serial/StopSecondaryNode 142.05
165 TestMultiControlPlane/serial/RestartSecondaryNode 48.19
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 419.59
170 TestMultiControlPlane/serial/StopCluster 142.18
230 TestMultiNode/serial/RestartKeepsNodes 313.66
232 TestMultiNode/serial/StopMultiNode 141.48
239 TestPreload 172.41
247 TestKubernetesUpgrade 421.93
271 TestPause/serial/SecondStartNoReconfiguration 58.53
317 TestStartStop/group/old-k8s-version/serial/FirstStart 277.45
339 TestStartStop/group/no-preload/serial/Stop 139.05
341 TestStartStop/group/embed-certs/serial/Stop 139.14
344 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.07
345 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
348 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
349 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 118.7
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
355 TestStartStop/group/old-k8s-version/serial/SecondStart 705.31
356 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.17
357 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.25
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.17
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.4
360 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 384.05
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 436.15
362 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 308.45
363 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 155.54
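
A minimal local-repro sketch for one of the failures above, assuming a checkout of the minikube repository, a built out/minikube-linux-amd64 binary, and a working kvm2/libvirt host; the go test flags shown (--binary, --minikube-start-args) are assumptions and may not match the exact CI invocation:

	# Hypothetical: re-run a single failed test against the same driver/runtime combination.
	go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 60m \
	  -args --binary="out/minikube-linux-amd64" \
	  --minikube-start-args="--driver=kvm2 --container-runtime=crio"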
TestAddons/parallel/Ingress (152.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-934361 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-934361 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-934361 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [54f74d8d-0de6-4905-8880-cdc716c944b3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [54f74d8d-0de6-4905-8880-cdc716c944b3] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004854151s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-934361 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.437781994s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-934361 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.135
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-934361 addons disable ingress --alsologtostderr -v=1: (7.84795239s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-934361 -n addons-934361
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-934361 logs -n 25: (1.412484197s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-029298 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | -p download-only-029298                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-029298                                                                     | download-only-029298 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-330754                                                                     | download-only-330754 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-029298                                                                     | download-only-029298 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-330619 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | binary-mirror-330619                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35745                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-330619                                                                     | binary-mirror-330619 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-934361 --wait=true                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 17:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| ip      | addons-934361 ip                                                                            | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-934361 ssh curl -s                                                                   | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-934361 ssh cat                                                                       | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | /opt/local-path-provisioner/pvc-0ebcd1de-0138-48d2-b5bd-8d480b1e737e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-934361 addons                                                                        | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | -p addons-934361                                                                            |                      |         |         |                     |                     |
	| addons  | addons-934361 addons                                                                        | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | -p addons-934361                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-934361 ip                                                                            | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 16:57:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 16:57:32.913167   19497 out.go:291] Setting OutFile to fd 1 ...
	I0422 16:57:32.913443   19497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:32.913454   19497 out.go:304] Setting ErrFile to fd 2...
	I0422 16:57:32.913458   19497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:32.913649   19497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 16:57:32.914295   19497 out.go:298] Setting JSON to false
	I0422 16:57:32.915237   19497 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2398,"bootTime":1713802655,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 16:57:32.915307   19497 start.go:139] virtualization: kvm guest
	I0422 16:57:32.917660   19497 out.go:177] * [addons-934361] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 16:57:32.920013   19497 notify.go:220] Checking for updates...
	I0422 16:57:32.920030   19497 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 16:57:32.921622   19497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 16:57:32.922949   19497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 16:57:32.924377   19497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:57:32.926241   19497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 16:57:32.927908   19497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 16:57:32.929605   19497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 16:57:32.962505   19497 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 16:57:32.964096   19497 start.go:297] selected driver: kvm2
	I0422 16:57:32.964115   19497 start.go:901] validating driver "kvm2" against <nil>
	I0422 16:57:32.964126   19497 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 16:57:32.964847   19497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:57:32.964928   19497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 16:57:32.980022   19497 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 16:57:32.980067   19497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 16:57:32.980266   19497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 16:57:32.980331   19497 cni.go:84] Creating CNI manager for ""
	I0422 16:57:32.980343   19497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:57:32.980354   19497 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 16:57:32.980410   19497 start.go:340] cluster config:
	{Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:57:32.980492   19497 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:57:32.983473   19497 out.go:177] * Starting "addons-934361" primary control-plane node in "addons-934361" cluster
	I0422 16:57:32.985031   19497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 16:57:32.985079   19497 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 16:57:32.985097   19497 cache.go:56] Caching tarball of preloaded images
	I0422 16:57:32.985201   19497 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 16:57:32.985213   19497 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 16:57:32.985492   19497 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/config.json ...
	I0422 16:57:32.985512   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/config.json: {Name:mkfb81b895cc31bd1604cd73f5f7b7f89bcc4420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:32.985639   19497 start.go:360] acquireMachinesLock for addons-934361: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 16:57:32.985683   19497 start.go:364] duration metric: took 31.081µs to acquireMachinesLock for "addons-934361"
	I0422 16:57:32.985702   19497 start.go:93] Provisioning new machine with config: &{Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 16:57:32.985757   19497 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 16:57:32.987607   19497 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0422 16:57:32.987744   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:57:32.987783   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:57:33.002165   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0422 16:57:33.002644   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:57:33.003301   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:57:33.003322   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:57:33.003607   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:57:33.003807   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:33.003944   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:33.004092   19497 start.go:159] libmachine.API.Create for "addons-934361" (driver="kvm2")
	I0422 16:57:33.004139   19497 client.go:168] LocalClient.Create starting
	I0422 16:57:33.004189   19497 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 16:57:33.157525   19497 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 16:57:33.233084   19497 main.go:141] libmachine: Running pre-create checks...
	I0422 16:57:33.233108   19497 main.go:141] libmachine: (addons-934361) Calling .PreCreateCheck
	I0422 16:57:33.233603   19497 main.go:141] libmachine: (addons-934361) Calling .GetConfigRaw
	I0422 16:57:33.234046   19497 main.go:141] libmachine: Creating machine...
	I0422 16:57:33.234059   19497 main.go:141] libmachine: (addons-934361) Calling .Create
	I0422 16:57:33.234210   19497 main.go:141] libmachine: (addons-934361) Creating KVM machine...
	I0422 16:57:33.235492   19497 main.go:141] libmachine: (addons-934361) DBG | found existing default KVM network
	I0422 16:57:33.236263   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.236117   19519 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001831f0}
	I0422 16:57:33.236302   19497 main.go:141] libmachine: (addons-934361) DBG | created network xml: 
	I0422 16:57:33.236320   19497 main.go:141] libmachine: (addons-934361) DBG | <network>
	I0422 16:57:33.236336   19497 main.go:141] libmachine: (addons-934361) DBG |   <name>mk-addons-934361</name>
	I0422 16:57:33.236344   19497 main.go:141] libmachine: (addons-934361) DBG |   <dns enable='no'/>
	I0422 16:57:33.236366   19497 main.go:141] libmachine: (addons-934361) DBG |   
	I0422 16:57:33.236386   19497 main.go:141] libmachine: (addons-934361) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 16:57:33.236420   19497 main.go:141] libmachine: (addons-934361) DBG |     <dhcp>
	I0422 16:57:33.236432   19497 main.go:141] libmachine: (addons-934361) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 16:57:33.236442   19497 main.go:141] libmachine: (addons-934361) DBG |     </dhcp>
	I0422 16:57:33.236460   19497 main.go:141] libmachine: (addons-934361) DBG |   </ip>
	I0422 16:57:33.236473   19497 main.go:141] libmachine: (addons-934361) DBG |   
	I0422 16:57:33.236484   19497 main.go:141] libmachine: (addons-934361) DBG | </network>
	I0422 16:57:33.236495   19497 main.go:141] libmachine: (addons-934361) DBG | 
	I0422 16:57:33.242006   19497 main.go:141] libmachine: (addons-934361) DBG | trying to create private KVM network mk-addons-934361 192.168.39.0/24...
	I0422 16:57:33.310405   19497 main.go:141] libmachine: (addons-934361) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361 ...
	I0422 16:57:33.310441   19497 main.go:141] libmachine: (addons-934361) DBG | private KVM network mk-addons-934361 192.168.39.0/24 created
	I0422 16:57:33.310455   19497 main.go:141] libmachine: (addons-934361) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 16:57:33.310481   19497 main.go:141] libmachine: (addons-934361) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 16:57:33.310572   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.310240   19519 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:57:33.558499   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.558343   19519 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa...
	I0422 16:57:33.646061   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.645931   19519 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/addons-934361.rawdisk...
	I0422 16:57:33.646092   19497 main.go:141] libmachine: (addons-934361) DBG | Writing magic tar header
	I0422 16:57:33.646103   19497 main.go:141] libmachine: (addons-934361) DBG | Writing SSH key tar header
	I0422 16:57:33.646111   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.646050   19519 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361 ...
	I0422 16:57:33.646210   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361
	I0422 16:57:33.646245   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361 (perms=drwx------)
	I0422 16:57:33.646257   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 16:57:33.646272   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:57:33.646281   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 16:57:33.646291   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 16:57:33.646299   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins
	I0422 16:57:33.646309   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home
	I0422 16:57:33.646317   19497 main.go:141] libmachine: (addons-934361) DBG | Skipping /home - not owner
	I0422 16:57:33.646332   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 16:57:33.646358   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 16:57:33.646376   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 16:57:33.646388   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 16:57:33.646397   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 16:57:33.646405   19497 main.go:141] libmachine: (addons-934361) Creating domain...
	I0422 16:57:33.647547   19497 main.go:141] libmachine: (addons-934361) define libvirt domain using xml: 
	I0422 16:57:33.647575   19497 main.go:141] libmachine: (addons-934361) <domain type='kvm'>
	I0422 16:57:33.647585   19497 main.go:141] libmachine: (addons-934361)   <name>addons-934361</name>
	I0422 16:57:33.647593   19497 main.go:141] libmachine: (addons-934361)   <memory unit='MiB'>4000</memory>
	I0422 16:57:33.647603   19497 main.go:141] libmachine: (addons-934361)   <vcpu>2</vcpu>
	I0422 16:57:33.647614   19497 main.go:141] libmachine: (addons-934361)   <features>
	I0422 16:57:33.647622   19497 main.go:141] libmachine: (addons-934361)     <acpi/>
	I0422 16:57:33.647633   19497 main.go:141] libmachine: (addons-934361)     <apic/>
	I0422 16:57:33.647644   19497 main.go:141] libmachine: (addons-934361)     <pae/>
	I0422 16:57:33.647651   19497 main.go:141] libmachine: (addons-934361)     
	I0422 16:57:33.647662   19497 main.go:141] libmachine: (addons-934361)   </features>
	I0422 16:57:33.647670   19497 main.go:141] libmachine: (addons-934361)   <cpu mode='host-passthrough'>
	I0422 16:57:33.647684   19497 main.go:141] libmachine: (addons-934361)   
	I0422 16:57:33.647698   19497 main.go:141] libmachine: (addons-934361)   </cpu>
	I0422 16:57:33.647706   19497 main.go:141] libmachine: (addons-934361)   <os>
	I0422 16:57:33.647711   19497 main.go:141] libmachine: (addons-934361)     <type>hvm</type>
	I0422 16:57:33.647719   19497 main.go:141] libmachine: (addons-934361)     <boot dev='cdrom'/>
	I0422 16:57:33.647741   19497 main.go:141] libmachine: (addons-934361)     <boot dev='hd'/>
	I0422 16:57:33.647754   19497 main.go:141] libmachine: (addons-934361)     <bootmenu enable='no'/>
	I0422 16:57:33.647758   19497 main.go:141] libmachine: (addons-934361)   </os>
	I0422 16:57:33.647784   19497 main.go:141] libmachine: (addons-934361)   <devices>
	I0422 16:57:33.647808   19497 main.go:141] libmachine: (addons-934361)     <disk type='file' device='cdrom'>
	I0422 16:57:33.647829   19497 main.go:141] libmachine: (addons-934361)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/boot2docker.iso'/>
	I0422 16:57:33.647838   19497 main.go:141] libmachine: (addons-934361)       <target dev='hdc' bus='scsi'/>
	I0422 16:57:33.647851   19497 main.go:141] libmachine: (addons-934361)       <readonly/>
	I0422 16:57:33.647862   19497 main.go:141] libmachine: (addons-934361)     </disk>
	I0422 16:57:33.647872   19497 main.go:141] libmachine: (addons-934361)     <disk type='file' device='disk'>
	I0422 16:57:33.647891   19497 main.go:141] libmachine: (addons-934361)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 16:57:33.647906   19497 main.go:141] libmachine: (addons-934361)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/addons-934361.rawdisk'/>
	I0422 16:57:33.647919   19497 main.go:141] libmachine: (addons-934361)       <target dev='hda' bus='virtio'/>
	I0422 16:57:33.647926   19497 main.go:141] libmachine: (addons-934361)     </disk>
	I0422 16:57:33.647936   19497 main.go:141] libmachine: (addons-934361)     <interface type='network'>
	I0422 16:57:33.647947   19497 main.go:141] libmachine: (addons-934361)       <source network='mk-addons-934361'/>
	I0422 16:57:33.647960   19497 main.go:141] libmachine: (addons-934361)       <model type='virtio'/>
	I0422 16:57:33.647971   19497 main.go:141] libmachine: (addons-934361)     </interface>
	I0422 16:57:33.647983   19497 main.go:141] libmachine: (addons-934361)     <interface type='network'>
	I0422 16:57:33.647995   19497 main.go:141] libmachine: (addons-934361)       <source network='default'/>
	I0422 16:57:33.648006   19497 main.go:141] libmachine: (addons-934361)       <model type='virtio'/>
	I0422 16:57:33.648018   19497 main.go:141] libmachine: (addons-934361)     </interface>
	I0422 16:57:33.648026   19497 main.go:141] libmachine: (addons-934361)     <serial type='pty'>
	I0422 16:57:33.648034   19497 main.go:141] libmachine: (addons-934361)       <target port='0'/>
	I0422 16:57:33.648042   19497 main.go:141] libmachine: (addons-934361)     </serial>
	I0422 16:57:33.648047   19497 main.go:141] libmachine: (addons-934361)     <console type='pty'>
	I0422 16:57:33.648055   19497 main.go:141] libmachine: (addons-934361)       <target type='serial' port='0'/>
	I0422 16:57:33.648062   19497 main.go:141] libmachine: (addons-934361)     </console>
	I0422 16:57:33.648068   19497 main.go:141] libmachine: (addons-934361)     <rng model='virtio'>
	I0422 16:57:33.648076   19497 main.go:141] libmachine: (addons-934361)       <backend model='random'>/dev/random</backend>
	I0422 16:57:33.648083   19497 main.go:141] libmachine: (addons-934361)     </rng>
	I0422 16:57:33.648089   19497 main.go:141] libmachine: (addons-934361)     
	I0422 16:57:33.648117   19497 main.go:141] libmachine: (addons-934361)     
	I0422 16:57:33.648138   19497 main.go:141] libmachine: (addons-934361)   </devices>
	I0422 16:57:33.648151   19497 main.go:141] libmachine: (addons-934361) </domain>
	I0422 16:57:33.648160   19497 main.go:141] libmachine: (addons-934361) 
	I0422 16:57:33.654106   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:4c:a2:07 in network default
	I0422 16:57:33.654745   19497 main.go:141] libmachine: (addons-934361) Ensuring networks are active...
	I0422 16:57:33.654762   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:33.655480   19497 main.go:141] libmachine: (addons-934361) Ensuring network default is active
	I0422 16:57:33.655867   19497 main.go:141] libmachine: (addons-934361) Ensuring network mk-addons-934361 is active
	I0422 16:57:33.656457   19497 main.go:141] libmachine: (addons-934361) Getting domain xml...
	I0422 16:57:33.657064   19497 main.go:141] libmachine: (addons-934361) Creating domain...
	I0422 16:57:35.064105   19497 main.go:141] libmachine: (addons-934361) Waiting to get IP...
	I0422 16:57:35.064943   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.065402   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.065447   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.065371   19519 retry.go:31] will retry after 196.289335ms: waiting for machine to come up
	I0422 16:57:35.262878   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.263318   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.263351   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.263261   19519 retry.go:31] will retry after 329.965242ms: waiting for machine to come up
	I0422 16:57:35.594897   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.595516   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.595557   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.595471   19519 retry.go:31] will retry after 323.084257ms: waiting for machine to come up
	I0422 16:57:35.919988   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.920439   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.920461   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.920404   19519 retry.go:31] will retry after 530.948858ms: waiting for machine to come up
	I0422 16:57:36.453183   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:36.453601   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:36.453631   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:36.453540   19519 retry.go:31] will retry after 631.595219ms: waiting for machine to come up
	I0422 16:57:37.086388   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:37.086741   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:37.086767   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:37.086701   19519 retry.go:31] will retry after 816.177659ms: waiting for machine to come up
	I0422 16:57:37.904194   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:37.904562   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:37.904589   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:37.904521   19519 retry.go:31] will retry after 920.390325ms: waiting for machine to come up
	I0422 16:57:38.826553   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:38.826989   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:38.827034   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:38.826953   19519 retry.go:31] will retry after 1.134107914s: waiting for machine to come up
	I0422 16:57:39.963410   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:39.963825   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:39.963852   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:39.963778   19519 retry.go:31] will retry after 1.219492702s: waiting for machine to come up
	I0422 16:57:41.185380   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:41.185754   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:41.185776   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:41.185708   19519 retry.go:31] will retry after 1.58783081s: waiting for machine to come up
	I0422 16:57:42.775763   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:42.776237   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:42.776269   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:42.776191   19519 retry.go:31] will retry after 2.643870295s: waiting for machine to come up
	I0422 16:57:45.423145   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:45.423608   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:45.423638   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:45.423553   19519 retry.go:31] will retry after 2.886737467s: waiting for machine to come up
	I0422 16:57:48.312273   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:48.312843   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:48.312869   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:48.312792   19519 retry.go:31] will retry after 3.559179926s: waiting for machine to come up
	I0422 16:57:51.876561   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:51.877079   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:51.877103   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:51.877016   19519 retry.go:31] will retry after 4.672115704s: waiting for machine to come up
	I0422 16:57:56.555319   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.555818   19497 main.go:141] libmachine: (addons-934361) Found IP for machine: 192.168.39.135
	I0422 16:57:56.555842   19497 main.go:141] libmachine: (addons-934361) Reserving static IP address...
	I0422 16:57:56.555857   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has current primary IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.556164   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find host DHCP lease matching {name: "addons-934361", mac: "52:54:00:34:5f:36", ip: "192.168.39.135"} in network mk-addons-934361
	I0422 16:57:56.632283   19497 main.go:141] libmachine: (addons-934361) DBG | Getting to WaitForSSH function...
	I0422 16:57:56.632316   19497 main.go:141] libmachine: (addons-934361) Reserved static IP address: 192.168.39.135
	I0422 16:57:56.632337   19497 main.go:141] libmachine: (addons-934361) Waiting for SSH to be available...
	I0422 16:57:56.634699   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.635034   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:56.635062   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.635330   19497 main.go:141] libmachine: (addons-934361) DBG | Using SSH client type: external
	I0422 16:57:56.635352   19497 main.go:141] libmachine: (addons-934361) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa (-rw-------)
	I0422 16:57:56.635384   19497 main.go:141] libmachine: (addons-934361) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 16:57:56.635402   19497 main.go:141] libmachine: (addons-934361) DBG | About to run SSH command:
	I0422 16:57:56.635422   19497 main.go:141] libmachine: (addons-934361) DBG | exit 0
	I0422 16:57:56.772006   19497 main.go:141] libmachine: (addons-934361) DBG | SSH cmd err, output: <nil>: 
	I0422 16:57:56.772370   19497 main.go:141] libmachine: (addons-934361) KVM machine creation complete!
	I0422 16:57:56.772674   19497 main.go:141] libmachine: (addons-934361) Calling .GetConfigRaw
	I0422 16:57:56.773187   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:56.773436   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:56.773659   19497 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 16:57:56.773681   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:57:56.775180   19497 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 16:57:56.775198   19497 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 16:57:56.775206   19497 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 16:57:56.775214   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:56.777738   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.778089   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:56.778135   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.778248   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:56.778459   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.778644   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.778845   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:56.779028   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:56.779239   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:56.779252   19497 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 16:57:56.890411   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 16:57:56.890433   19497 main.go:141] libmachine: Detecting the provisioner...
	I0422 16:57:56.890441   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:56.892911   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.893187   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:56.893216   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.893354   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:56.893537   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.893697   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.893824   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:56.893962   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:56.894154   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:56.894167   19497 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 16:57:57.008025   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 16:57:57.008103   19497 main.go:141] libmachine: found compatible host: buildroot
	I0422 16:57:57.008118   19497 main.go:141] libmachine: Provisioning with buildroot...
	I0422 16:57:57.008130   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:57.008409   19497 buildroot.go:166] provisioning hostname "addons-934361"
	I0422 16:57:57.008434   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:57.008607   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.010808   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.011117   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.011155   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.011293   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.011482   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.011654   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.011790   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.011940   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:57.012116   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:57.012129   19497 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-934361 && echo "addons-934361" | sudo tee /etc/hostname
	I0422 16:57:57.138442   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-934361
	
	I0422 16:57:57.138469   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.140913   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.141175   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.141206   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.141357   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.141600   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.141820   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.141978   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.142135   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:57.142301   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:57.142323   19497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-934361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-934361/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-934361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 16:57:57.265945   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
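
Editor's note: the hostname step above is idempotent — it only touches /etc/hosts when no entry for addons-934361 exists, preferring to rewrite the 127.0.1.1 line over appending a new one. A minimal Go sketch of that logic (hypothetical ensureHostname helper, not the actual libmachine provisioning code):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the idempotent update shown in the log: leave
	// /etc/hosts unchanged if an entry for the hostname already exists,
	// otherwise rewrite the 127.0.1.1 line or append a new one.
	func ensureHostname(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", name)
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "addons-934361"))
	}
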
	I0422 16:57:57.265971   19497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 16:57:57.266003   19497 buildroot.go:174] setting up certificates
	I0422 16:57:57.266018   19497 provision.go:84] configureAuth start
	I0422 16:57:57.266030   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:57.266309   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:57.268905   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.269231   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.269262   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.269383   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.271340   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.271729   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.271759   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.271850   19497 provision.go:143] copyHostCerts
	I0422 16:57:57.271924   19497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 16:57:57.272087   19497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 16:57:57.272169   19497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 16:57:57.272217   19497 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.addons-934361 san=[127.0.0.1 192.168.39.135 addons-934361 localhost minikube]
	I0422 16:57:57.434206   19497 provision.go:177] copyRemoteCerts
	I0422 16:57:57.434281   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 16:57:57.434305   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.437182   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.437549   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.437576   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.437763   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.437944   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.438097   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.438293   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:57.527105   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 16:57:57.553360   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 16:57:57.580930   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 16:57:57.606784   19497 provision.go:87] duration metric: took 340.753357ms to configureAuth
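
Editor's note: configureAuth above generates a server certificate whose SANs match the logged list ([127.0.0.1 192.168.39.135 addons-934361 localhost minikube]) and copies it to /etc/docker on the guest. A minimal Go sketch of producing a certificate with that SAN list via crypto/x509 — self-signed here for brevity, whereas the logged server.pem is signed by the minikube CA key; serial and validity are illustrative assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SAN list and org taken from the logged provision.go line.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-934361"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // cf. CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-934361", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.135")},
		}
		// Self-signed for brevity; the real server cert is signed by the minikube CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
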
	I0422 16:57:57.606817   19497 buildroot.go:189] setting minikube options for container-runtime
	I0422 16:57:57.607047   19497 config.go:182] Loaded profile config "addons-934361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 16:57:57.607118   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.610035   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.610408   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.610438   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.610606   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.610812   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.610954   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.611071   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.611215   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:57.611368   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:57.611382   19497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 16:57:57.906243   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 16:57:57.906269   19497 main.go:141] libmachine: Checking connection to Docker...
	I0422 16:57:57.906277   19497 main.go:141] libmachine: (addons-934361) Calling .GetURL
	I0422 16:57:57.907607   19497 main.go:141] libmachine: (addons-934361) DBG | Using libvirt version 6000000
	I0422 16:57:57.909801   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.910133   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.910176   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.910336   19497 main.go:141] libmachine: Docker is up and running!
	I0422 16:57:57.910359   19497 main.go:141] libmachine: Reticulating splines...
	I0422 16:57:57.910446   19497 client.go:171] duration metric: took 24.906287596s to LocalClient.Create
	I0422 16:57:57.910477   19497 start.go:167] duration metric: took 24.906383498s to libmachine.API.Create "addons-934361"
	I0422 16:57:57.910495   19497 start.go:293] postStartSetup for "addons-934361" (driver="kvm2")
	I0422 16:57:57.910511   19497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 16:57:57.910536   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:57.910762   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 16:57:57.910784   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.912827   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.913165   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.913195   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.913309   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.913484   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.915346   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.915516   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:58.002575   19497 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 16:57:58.006932   19497 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 16:57:58.006956   19497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 16:57:58.007053   19497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 16:57:58.007089   19497 start.go:296] duration metric: took 96.583636ms for postStartSetup
	I0422 16:57:58.007147   19497 main.go:141] libmachine: (addons-934361) Calling .GetConfigRaw
	I0422 16:57:58.007638   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:58.009887   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.010260   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.010288   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.010507   19497 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/config.json ...
	I0422 16:57:58.010668   19497 start.go:128] duration metric: took 25.024900859s to createHost
	I0422 16:57:58.010689   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:58.012620   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.012910   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.012934   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.013027   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:58.013181   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.013293   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.013392   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:58.013573   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:58.013719   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:58.013730   19497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 16:57:58.128279   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713805078.111978304
	
	I0422 16:57:58.128312   19497 fix.go:216] guest clock: 1713805078.111978304
	I0422 16:57:58.128324   19497 fix.go:229] Guest: 2024-04-22 16:57:58.111978304 +0000 UTC Remote: 2024-04-22 16:57:58.010678611 +0000 UTC m=+25.143897313 (delta=101.299693ms)
	I0422 16:57:58.128353   19497 fix.go:200] guest clock delta is within tolerance: 101.299693ms
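
Editor's note: the guest-clock check above only resyncs the VM clock when the host/guest delta exceeds a tolerance; here the 101ms delta is accepted. A tiny Go sketch of that comparison (the 2s tolerance below is an assumption for illustration, not necessarily minikube's value):

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether guest and host clocks differ by no
	// more than the given tolerance, in which case no resync is needed.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(101 * time.Millisecond) // comparable to the logged 101.299693ms delta
		fmt.Println(clockWithinTolerance(guest, host, 2*time.Second)) // true: within tolerance
	}
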
	I0422 16:57:58.128361   19497 start.go:83] releasing machines lock for "addons-934361", held for 25.142666377s
	I0422 16:57:58.128389   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.128668   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:58.131304   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.131683   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.131731   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.131892   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.132417   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.132620   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.132690   19497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 16:57:58.132734   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:58.132987   19497 ssh_runner.go:195] Run: cat /version.json
	I0422 16:57:58.133046   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:58.135755   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.135911   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.136185   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.136210   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.136269   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.136291   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.136446   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:58.136583   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:58.136660   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.136750   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.136819   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:58.136932   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:58.136979   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:58.137078   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:58.265754   19497 ssh_runner.go:195] Run: systemctl --version
	I0422 16:57:58.272127   19497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 16:57:58.438948   19497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 16:57:58.445801   19497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 16:57:58.445886   19497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 16:57:58.462759   19497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 16:57:58.462807   19497 start.go:494] detecting cgroup driver to use...
	I0422 16:57:58.462865   19497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 16:57:58.482589   19497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 16:57:58.496926   19497 docker.go:217] disabling cri-docker service (if available) ...
	I0422 16:57:58.496993   19497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 16:57:58.510534   19497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 16:57:58.524640   19497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 16:57:58.643412   19497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 16:57:58.778594   19497 docker.go:233] disabling docker service ...
	I0422 16:57:58.778671   19497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 16:57:58.794473   19497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 16:57:58.807555   19497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 16:57:58.946799   19497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 16:57:59.088635   19497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 16:57:59.103507   19497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 16:57:59.123395   19497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 16:57:59.123462   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.134619   19497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 16:57:59.134693   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.146372   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.158324   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.170135   19497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 16:57:59.182367   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.194501   19497 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.214253   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.227940   19497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 16:57:59.240245   19497 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 16:57:59.240303   19497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 16:57:59.258497   19497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 16:57:59.270917   19497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:57:59.422933   19497 ssh_runner.go:195] Run: sudo systemctl restart crio
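
Editor's note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A minimal Go sketch of the two central substitutions applied to the file contents in memory (illustrative patchCrioConf helper, not minikube's implementation):

	package main

	import (
		"fmt"
		"regexp"
	)

	// patchCrioConf applies the same two substitutions the log performs with sed:
	// point pause_image at the requested pause image and switch cgroup_manager.
	func patchCrioConf(conf, pauseImage, cgroupManager string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return conf
	}

	func main() {
		in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(patchCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs"))
	}
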
	I0422 16:57:59.570460   19497 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 16:57:59.570542   19497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 16:57:59.576139   19497 start.go:562] Will wait 60s for crictl version
	I0422 16:57:59.576230   19497 ssh_runner.go:195] Run: which crictl
	I0422 16:57:59.580266   19497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 16:57:59.614859   19497 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 16:57:59.614990   19497 ssh_runner.go:195] Run: crio --version
	I0422 16:57:59.644280   19497 ssh_runner.go:195] Run: crio --version
	I0422 16:57:59.676250   19497 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 16:57:59.677940   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:59.680787   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:59.681101   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:59.681133   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:59.681326   19497 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 16:57:59.685758   19497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 16:57:59.698946   19497 kubeadm.go:877] updating cluster {Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 16:57:59.699046   19497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 16:57:59.699084   19497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 16:57:59.737065   19497 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 16:57:59.737130   19497 ssh_runner.go:195] Run: which lz4
	I0422 16:57:59.741558   19497 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 16:57:59.746192   19497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 16:57:59.746232   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 16:58:01.202590   19497 crio.go:462] duration metric: took 1.461060178s to copy over tarball
	I0422 16:58:01.202666   19497 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 16:58:03.638220   19497 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.435508535s)
	I0422 16:58:03.638258   19497 crio.go:469] duration metric: took 2.435643243s to extract the tarball
	I0422 16:58:03.638265   19497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 16:58:03.675619   19497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 16:58:03.721783   19497 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 16:58:03.721812   19497 cache_images.go:84] Images are preloaded, skipping loading
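
Editor's note: the preload path above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached tarball over when it does not, and unpacks it under /var with xattrs preserved so the CRI-O image store is populated without pulling. A small Go sketch of the extraction step only, run locally rather than through ssh_runner (requires tar and lz4 on PATH; sudo as in the logged command):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload runs the same tar invocation as the log: stream-decompress
	// the preloaded image tarball with lz4 and unpack it under /var, keeping
	// security.capability xattrs on the image layers.
	func extractPreload(tarball string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
		}
		return nil
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}
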
	I0422 16:58:03.721821   19497 kubeadm.go:928] updating node { 192.168.39.135 8443 v1.30.0 crio true true} ...
	I0422 16:58:03.721938   19497 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-934361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 16:58:03.722008   19497 ssh_runner.go:195] Run: crio config
	I0422 16:58:03.770282   19497 cni.go:84] Creating CNI manager for ""
	I0422 16:58:03.770315   19497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:58:03.770340   19497 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 16:58:03.770363   19497 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-934361 NodeName:addons-934361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 16:58:03.770501   19497 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-934361"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 16:58:03.770570   19497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 16:58:03.781382   19497 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 16:58:03.781456   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 16:58:03.791904   19497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0422 16:58:03.810753   19497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 16:58:03.830138   19497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0422 16:58:03.851812   19497 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I0422 16:58:03.855935   19497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 16:58:03.869134   19497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:58:04.004126   19497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 16:58:04.023200   19497 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361 for IP: 192.168.39.135
	I0422 16:58:04.023232   19497 certs.go:194] generating shared ca certs ...
	I0422 16:58:04.023260   19497 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.023423   19497 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 16:58:04.169771   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt ...
	I0422 16:58:04.169799   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt: {Name:mk733199a2acd8a83bf9ab3c6df11b5053cc823a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.169976   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key ...
	I0422 16:58:04.169990   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key: {Name:mkb06c6d181caf61810be6bbb1655d5e3186dd47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.170061   19497 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 16:58:04.440848   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt ...
	I0422 16:58:04.440884   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt: {Name:mka8fab8fb90853c7953652d2abd820aa5f16fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.441032   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key ...
	I0422 16:58:04.441044   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key: {Name:mke3e87318f9834e9111317ba9236faea4c0aa13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.441110   19497 certs.go:256] generating profile certs ...
	I0422 16:58:04.441162   19497 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.key
	I0422 16:58:04.441174   19497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt with IP's: []
	I0422 16:58:04.685380   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt ...
	I0422 16:58:04.685413   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: {Name:mk7aaf8ea5151f336baa6fd63646eb55b3c1f3e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.685575   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.key ...
	I0422 16:58:04.685586   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.key: {Name:mk1d89d43202fa99577c60b6a234332203615819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.685647   19497 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e
	I0422 16:58:04.685664   19497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.135]
	I0422 16:58:04.813975   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e ...
	I0422 16:58:04.814012   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e: {Name:mkb421f1c658ace3f5a29849fbe6303faea94ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.814173   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e ...
	I0422 16:58:04.814189   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e: {Name:mk56953efdb0d28ca9acd9032a390bf1a26f1f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.814253   19497 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt
	I0422 16:58:04.814347   19497 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key
	I0422 16:58:04.814395   19497 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key
	I0422 16:58:04.814414   19497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt with IP's: []
	I0422 16:58:05.054913   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt ...
	I0422 16:58:05.054945   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt: {Name:mke240646e716d28e8ed5c155cf66f0d7b90a640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:05.055105   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key ...
	I0422 16:58:05.055117   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key: {Name:mke4d7b005de65f852f8891254497ff22e8f52e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:05.055295   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 16:58:05.055330   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 16:58:05.055357   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 16:58:05.055377   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 16:58:05.055970   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 16:58:05.084644   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 16:58:05.111750   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 16:58:05.137857   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 16:58:05.163550   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 16:58:05.189830   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 16:58:05.216714   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 16:58:05.242797   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 16:58:05.268849   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 16:58:05.295030   19497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 16:58:05.313884   19497 ssh_runner.go:195] Run: openssl version
	I0422 16:58:05.320178   19497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 16:58:05.333133   19497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:58:05.338116   19497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:58:05.338178   19497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:58:05.344694   19497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 16:58:05.356791   19497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 16:58:05.361082   19497 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 16:58:05.361130   19497 kubeadm.go:391] StartCluster: {Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:58:05.361197   19497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 16:58:05.361242   19497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 16:58:05.403060   19497 cri.go:89] found id: ""
	I0422 16:58:05.403153   19497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 16:58:05.416874   19497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 16:58:05.438650   19497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 16:58:05.455170   19497 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 16:58:05.455191   19497 kubeadm.go:156] found existing configuration files:
	
	I0422 16:58:05.455235   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 16:58:05.467544   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 16:58:05.467619   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 16:58:05.487567   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 16:58:05.497552   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 16:58:05.497616   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 16:58:05.507989   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 16:58:05.517784   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 16:58:05.517839   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 16:58:05.528132   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 16:58:05.538030   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 16:58:05.538096   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
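The four checks above amount to a per-file sweep: if the expected control-plane endpoint is not found in a kubeconfig under /etc/kubernetes, the file is removed before kubeadm runs. A rough shell restatement of the logged steps (a hedged sketch, not minikube's actual code):

  # Keep each kubeconfig only if it already points at the expected endpoint;
  # otherwise remove it so kubeadm can regenerate it.
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
      || sudo rm -f "/etc/kubernetes/$f"
  done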
	I0422 16:58:05.548704   19497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 16:58:05.722346   19497 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 16:58:16.196199   19497 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 16:58:16.196342   19497 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 16:58:16.196445   19497 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 16:58:16.196531   19497 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 16:58:16.196645   19497 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 16:58:16.196732   19497 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 16:58:16.198434   19497 out.go:204]   - Generating certificates and keys ...
	I0422 16:58:16.198517   19497 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 16:58:16.198573   19497 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 16:58:16.198639   19497 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 16:58:16.198698   19497 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 16:58:16.198754   19497 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 16:58:16.198797   19497 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 16:58:16.198890   19497 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 16:58:16.199029   19497 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-934361 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0422 16:58:16.199086   19497 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 16:58:16.199237   19497 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-934361 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0422 16:58:16.199323   19497 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 16:58:16.199415   19497 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 16:58:16.199482   19497 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 16:58:16.199559   19497 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 16:58:16.199637   19497 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 16:58:16.199714   19497 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 16:58:16.199797   19497 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 16:58:16.199898   19497 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 16:58:16.200011   19497 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 16:58:16.200137   19497 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 16:58:16.200229   19497 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 16:58:16.202026   19497 out.go:204]   - Booting up control plane ...
	I0422 16:58:16.202149   19497 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 16:58:16.202244   19497 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 16:58:16.202342   19497 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 16:58:16.202475   19497 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 16:58:16.202588   19497 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 16:58:16.202666   19497 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 16:58:16.202818   19497 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 16:58:16.202907   19497 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 16:58:16.202998   19497 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.057478ms
	I0422 16:58:16.203103   19497 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 16:58:16.203200   19497 kubeadm.go:309] [api-check] The API server is healthy after 5.001477785s
	I0422 16:58:16.203356   19497 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 16:58:16.203511   19497 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 16:58:16.203594   19497 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 16:58:16.203825   19497 kubeadm.go:309] [mark-control-plane] Marking the node addons-934361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 16:58:16.203906   19497 kubeadm.go:309] [bootstrap-token] Using token: umwtaq.jexr9s59rhfkzxx5
	I0422 16:58:16.205516   19497 out.go:204]   - Configuring RBAC rules ...
	I0422 16:58:16.205668   19497 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 16:58:16.205751   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 16:58:16.205905   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 16:58:16.206054   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 16:58:16.206188   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 16:58:16.206311   19497 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 16:58:16.206465   19497 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 16:58:16.206528   19497 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 16:58:16.206602   19497 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 16:58:16.206610   19497 kubeadm.go:309] 
	I0422 16:58:16.206691   19497 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 16:58:16.206700   19497 kubeadm.go:309] 
	I0422 16:58:16.206805   19497 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 16:58:16.206813   19497 kubeadm.go:309] 
	I0422 16:58:16.206858   19497 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 16:58:16.206948   19497 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 16:58:16.207021   19497 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 16:58:16.207030   19497 kubeadm.go:309] 
	I0422 16:58:16.207109   19497 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 16:58:16.207117   19497 kubeadm.go:309] 
	I0422 16:58:16.207207   19497 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 16:58:16.207216   19497 kubeadm.go:309] 
	I0422 16:58:16.207289   19497 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 16:58:16.207407   19497 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 16:58:16.207505   19497 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 16:58:16.207523   19497 kubeadm.go:309] 
	I0422 16:58:16.207642   19497 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 16:58:16.207746   19497 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 16:58:16.207755   19497 kubeadm.go:309] 
	I0422 16:58:16.207867   19497 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token umwtaq.jexr9s59rhfkzxx5 \
	I0422 16:58:16.208024   19497 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 16:58:16.208058   19497 kubeadm.go:309] 	--control-plane 
	I0422 16:58:16.208067   19497 kubeadm.go:309] 
	I0422 16:58:16.208192   19497 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 16:58:16.208207   19497 kubeadm.go:309] 
	I0422 16:58:16.208283   19497 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token umwtaq.jexr9s59rhfkzxx5 \
	I0422 16:58:16.208393   19497 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
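If the bootstrap token or CA hash printed above ever needs to be reproduced by hand (for example when joining another node later), both can be recovered on the control-plane node. The commands below are a hedged sketch; the kubeadm path and certificate directory (/var/lib/minikube/certs) are taken from earlier lines of this log rather than kubeadm's default locations:

  # List current bootstrap tokens (run on the control-plane node as root).
  sudo /var/lib/minikube/binaries/v1.30.0/kubeadm token list
  # Recompute the --discovery-token-ca-cert-hash from the cluster CA certificate.
  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'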
	I0422 16:58:16.208407   19497 cni.go:84] Creating CNI manager for ""
	I0422 16:58:16.208420   19497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:58:16.210095   19497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 16:58:16.211364   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 16:58:16.225429   19497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
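The contents of the conflist written above are not included in the log; to see what actually landed on the node, the file can be read back via minikube ssh (profile name taken from this run):

  # Inspect the bridge CNI config minikube reports writing above.
  minikube -p addons-934361 ssh -- sudo ls -l /etc/cni/net.d/
  minikube -p addons-934361 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist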
	I0422 16:58:16.246994   19497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 16:58:16.247077   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:16.247111   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-934361 minikube.k8s.io/updated_at=2024_04_22T16_58_16_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=addons-934361 minikube.k8s.io/primary=true
	I0422 16:58:16.283016   19497 ops.go:34] apiserver oom_adj: -16
	I0422 16:58:16.402894   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:16.903825   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:17.403008   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:17.903778   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:18.403481   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:18.903898   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:19.403618   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:19.903105   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:20.403758   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:20.903842   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:21.403746   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:21.903760   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:22.403521   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:22.903667   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:23.403588   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:23.903636   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:24.403082   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:24.903653   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:25.402990   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:25.903142   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:26.403580   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:26.902897   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:27.403484   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:27.903633   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:28.402935   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:28.902981   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:29.403676   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:29.617693   19497 kubeadm.go:1107] duration metric: took 13.370678652s to wait for elevateKubeSystemPrivileges
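The repeated "kubectl get sa default" calls above are a readiness poll: minikube retries roughly every 500ms until the default service account exists, which is what the 13.37s duration metric measures. A hedged shell equivalent of that loop:

  # Poll until the default service account is created (sketch of the loop above).
  until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done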
	W0422 16:58:29.617726   19497 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 16:58:29.617734   19497 kubeadm.go:393] duration metric: took 24.256607098s to StartCluster
	I0422 16:58:29.617754   19497 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:29.617864   19497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 16:58:29.618244   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:29.618453   19497 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 16:58:29.620700   19497 out.go:177] * Verifying Kubernetes components...
	I0422 16:58:29.618485   19497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 16:58:29.618499   19497 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
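The toEnable map above is minikube's internal view of which addons this test run requests. The same set can be inspected or adjusted per profile from the CLI; shown here for reference with this run's profile name:

  # Show addon status for this profile, then enable or disable one by name.
  minikube -p addons-934361 addons list
  minikube -p addons-934361 addons enable metrics-server
  minikube -p addons-934361 addons disable metrics-server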
	I0422 16:58:29.618689   19497 config.go:182] Loaded profile config "addons-934361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 16:58:29.620802   19497 addons.go:69] Setting cloud-spanner=true in profile "addons-934361"
	I0422 16:58:29.620823   19497 addons.go:69] Setting yakd=true in profile "addons-934361"
	I0422 16:58:29.620846   19497 addons.go:234] Setting addon yakd=true in "addons-934361"
	I0422 16:58:29.620849   19497 addons.go:234] Setting addon cloud-spanner=true in "addons-934361"
	I0422 16:58:29.620878   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620879   19497 addons.go:69] Setting registry=true in profile "addons-934361"
	I0422 16:58:29.620884   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620900   19497 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-934361"
	I0422 16:58:29.620899   19497 addons.go:234] Setting addon registry=true in "addons-934361"
	I0422 16:58:29.620927   19497 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-934361"
	I0422 16:58:29.620938   19497 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-934361"
	I0422 16:58:29.620938   19497 addons.go:69] Setting metrics-server=true in profile "addons-934361"
	I0422 16:58:29.620957   19497 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-934361"
	I0422 16:58:29.620978   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620989   19497 addons.go:69] Setting inspektor-gadget=true in profile "addons-934361"
	I0422 16:58:29.621005   19497 addons.go:234] Setting addon inspektor-gadget=true in "addons-934361"
	I0422 16:58:29.621036   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621323   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621393   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621409   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.620929   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621456   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621479   19497 addons.go:69] Setting ingress-dns=true in profile "addons-934361"
	I0422 16:58:29.621529   19497 addons.go:234] Setting addon ingress-dns=true in "addons-934361"
	I0422 16:58:29.621576   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620979   19497 addons.go:234] Setting addon metrics-server=true in "addons-934361"
	I0422 16:58:29.621649   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621334   19497 addons.go:69] Setting volumesnapshots=true in profile "addons-934361"
	I0422 16:58:29.621755   19497 addons.go:234] Setting addon volumesnapshots=true in "addons-934361"
	I0422 16:58:29.621780   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621787   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621339   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621348   19497 addons.go:69] Setting gcp-auth=true in profile "addons-934361"
	I0422 16:58:29.623709   19497 mustload.go:65] Loading cluster: addons-934361
	I0422 16:58:29.621351   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621354   19497 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-934361"
	I0422 16:58:29.621352   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621358   19497 addons.go:69] Setting default-storageclass=true in profile "addons-934361"
	I0422 16:58:29.623897   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623903   19497 config.go:182] Loaded profile config "addons-934361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 16:58:29.623922   19497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-934361"
	I0422 16:58:29.621362   19497 addons.go:69] Setting storage-provisioner=true in profile "addons-934361"
	I0422 16:58:29.623991   19497 addons.go:234] Setting addon storage-provisioner=true in "addons-934361"
	I0422 16:58:29.624020   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.624249   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624305   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621368   19497 addons.go:69] Setting ingress=true in profile "addons-934361"
	I0422 16:58:29.624359   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624375   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624392   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.624395   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.624369   19497 addons.go:234] Setting addon ingress=true in "addons-934361"
	I0422 16:58:29.621848   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621946   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624517   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621976   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624571   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.622134   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624620   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623629   19497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:58:29.624655   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.623661   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623785   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623872   19497 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-934361"
	I0422 16:58:29.621367   19497 addons.go:69] Setting helm-tiller=true in profile "addons-934361"
	I0422 16:58:29.624893   19497 addons.go:234] Setting addon helm-tiller=true in "addons-934361"
	I0422 16:58:29.624933   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.625025   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.625042   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.625061   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.625258   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.625289   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.625361   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.625391   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.642294   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
	I0422 16:58:29.642879   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.642953   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33877
	I0422 16:58:29.643504   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.643521   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.643592   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.644034   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.644112   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.644130   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.644479   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.644511   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.645021   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.645535   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.645572   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.661968   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36307
	I0422 16:58:29.662504   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.663067   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.663100   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.663462   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.664068   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.664106   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.665271   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42287
	I0422 16:58:29.665726   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0422 16:58:29.665899   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.666223   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.666530   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.666545   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.666888   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.667410   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
	I0422 16:58:29.667453   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.667474   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.667665   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.667681   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.668064   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.668579   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.668596   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.668646   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.668845   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.669564   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I0422 16:58:29.669933   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.670014   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.670456   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.670473   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.670825   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.670838   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.670867   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.671035   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.675533   19497 addons.go:234] Setting addon default-storageclass=true in "addons-934361"
	I0422 16:58:29.675575   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.675816   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.675837   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.675533   19497 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-934361"
	I0422 16:58:29.675914   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.676247   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.676289   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.687851   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I0422 16:58:29.688976   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 16:58:29.689138   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0422 16:58:29.689583   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.689669   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.690224   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.690244   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.690650   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.691230   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.691271   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.691469   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0422 16:58:29.691618   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.691846   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.691861   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.692193   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.692348   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.692359   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.692406   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.692842   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.692858   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.692912   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.693307   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.693337   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.693831   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.693850   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.694477   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.694661   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.696620   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.697001   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.697020   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.697195   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34535
	I0422 16:58:29.697803   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.697879   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0422 16:58:29.698216   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.698394   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.698405   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.698603   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.698620   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.698802   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.698889   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.699334   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.699364   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.699820   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.699845   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.705293   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I0422 16:58:29.705854   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.706432   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.706452   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.706828   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.707412   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.707448   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.711744   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0422 16:58:29.712765   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.713257   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.713274   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.713652   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.713796   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.715846   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.718977   19497 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0422 16:58:29.717493   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0422 16:58:29.717908   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0422 16:58:29.720150   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
	I0422 16:58:29.721316   19497 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 16:58:29.721334   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0422 16:58:29.721354   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.721772   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.721868   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.722177   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.722560   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.722575   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.722599   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.722615   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.722989   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.723003   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.723285   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.723357   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.723394   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.723927   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.723968   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.724426   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0422 16:58:29.724554   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.725085   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.725250   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.725757   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.725773   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.726123   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.726683   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.726723   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.726946   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0422 16:58:29.727437   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.727530   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.727768   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.729407   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0422 16:58:29.728241   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.728274   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.728537   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.728870   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.729885   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I0422 16:58:29.730971   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0422 16:58:29.730982   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0422 16:58:29.730998   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.731044   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.731084   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.732851   19497 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0422 16:58:29.731504   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.731787   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.731871   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.733173   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0422 16:58:29.734432   19497 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0422 16:58:29.734465   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0422 16:58:29.734484   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.735191   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.735249   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.735307   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.735329   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.735346   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.735607   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.736006   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0422 16:58:29.736166   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.736366   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.736389   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.736742   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.736771   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.736983   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.737352   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.737368   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.737408   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.739110   19497 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0422 16:58:29.737754   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.738066   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.738090   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.738650   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.739430   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.740100   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.740653   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.740727   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.740733   19497 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0422 16:58:29.740748   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.740748   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0422 16:58:29.740767   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.741466   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.743017   19497 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0422 16:58:29.741524   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.741543   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.741666   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.744050   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0422 16:58:29.744609   19497 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 16:58:29.744621   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0422 16:58:29.744638   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.745323   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.745339   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.745417   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.745597   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.745665   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.745909   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.746655   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.746674   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.747045   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.747067   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.747504   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.747543   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.747732   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.747753   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.747772   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.749417   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0422 16:58:29.747842   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.748799   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.749748   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.752768   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0422 16:58:29.754205   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0422 16:58:29.752123   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41731
	I0422 16:58:29.752152   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.752180   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.752258   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.754388   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.754520   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.754864   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.755979   19497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 16:58:29.756177   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.759095   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0422 16:58:29.757493   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.757544   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.758135   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0422 16:58:29.758272   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.761979   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0422 16:58:29.760779   19497 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 16:58:29.760849   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.761005   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.761414   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.764599   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0422 16:58:29.763593   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 16:58:29.763735   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.763972   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.767137   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0422 16:58:29.765961   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.765989   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.766071   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.766575   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I0422 16:58:29.770547   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0422 16:58:29.769375   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.769431   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.770970   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0422 16:58:29.771011   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.771342   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0422 16:58:29.771623   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36563
	I0422 16:58:29.772241   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0422 16:58:29.772254   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0422 16:58:29.772274   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.772546   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.774169   19497 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0422 16:58:29.772957   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.773230   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.773283   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.773308   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.773432   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.774114   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.774348   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.775680   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.775683   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.775735   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.777072   19497 out.go:177]   - Using image docker.io/busybox:stable
	I0422 16:58:29.775884   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.775969   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.776487   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.776488   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.776523   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.776584   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.776585   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.776834   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0422 16:58:29.778440   19497 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 16:58:29.778589   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.779691   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.779727   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.779735   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:29.779819   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.779829   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.779842   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0422 16:58:29.779940   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.779929   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.780687   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.781436   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:29.780690   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.780698   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.780717   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.781468   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.781503   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.781812   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.781828   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.781986   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.782869   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0422 16:58:29.783699   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.783710   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I0422 16:58:29.784674   19497 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 16:58:29.783724   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.784688   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0422 16:58:29.784704   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.783833   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.783736   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.784733   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.785182   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.785204   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.785412   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.785417   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.787847   19497 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0422 16:58:29.786371   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.787299   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.788981   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.789205   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.789205   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 16:58:29.789245   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 16:58:29.789259   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.789564   19497 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 16:58:29.790917   19497 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0422 16:58:29.789580   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 16:58:29.789849   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.792375   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.792423   19497 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0422 16:58:29.792434   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792442   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0422 16:58:29.792458   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.790995   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.791199   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.791720   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792561   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.792563   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.792588   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.792619   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792623   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792705   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.792760   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792803   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.793720   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.794037   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.794239   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.794257   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.794307   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.794465   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.794849   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.795498   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.795790   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.796286   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.796307   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.796461   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.796510   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.796589   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.796754   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.796901   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.796957   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.797082   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.797489   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.797588   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.799372   19497 out.go:177]   - Using image docker.io/registry:2.8.3
	I0422 16:58:29.797864   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.797896   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.798433   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.800889   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.802375   19497 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0422 16:58:29.801059   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	W0422 16:58:29.801819   19497 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48392->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.803875   19497 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0422 16:58:29.804040   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.805184   19497 retry.go:31] will retry after 301.589206ms: ssh: handshake failed: read tcp 192.168.39.1:48392->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.805192   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0422 16:58:29.805218   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.805220   19497 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0422 16:58:29.806494   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0422 16:58:29.806512   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0422 16:58:29.806532   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	W0422 16:58:29.806563   19497 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48398->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.806584   19497 retry.go:31] will retry after 328.556573ms: ssh: handshake failed: read tcp 192.168.39.1:48398->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.809104   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809350   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809674   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.809689   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.809694   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809706   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809770   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.809867   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.809912   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.810016   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.810055   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.810136   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.810242   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.810374   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:30.030883   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0422 16:58:30.030901   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0422 16:58:30.080374   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0422 16:58:30.104428   19497 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0422 16:58:30.104448   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0422 16:58:30.109205   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 16:58:30.111929   19497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 16:58:30.112008   19497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 16:58:30.152063   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 16:58:30.154670   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0422 16:58:30.154694   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0422 16:58:30.158940   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 16:58:30.161825   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 16:58:30.172599   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 16:58:30.222046   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 16:58:30.222066   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0422 16:58:30.235902   19497 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0422 16:58:30.235933   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0422 16:58:30.244913   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0422 16:58:30.244939   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0422 16:58:30.251006   19497 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0422 16:58:30.251029   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0422 16:58:30.260120   19497 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0422 16:58:30.260138   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0422 16:58:30.286380   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0422 16:58:30.286405   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0422 16:58:30.391851   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 16:58:30.391876   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 16:58:30.438070   19497 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0422 16:58:30.438093   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0422 16:58:30.458038   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0422 16:58:30.458060   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0422 16:58:30.471625   19497 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0422 16:58:30.471654   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0422 16:58:30.504997   19497 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0422 16:58:30.505022   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0422 16:58:30.523323   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0422 16:58:30.523349   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0422 16:58:30.627429   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 16:58:30.627462   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 16:58:30.640646   19497 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0422 16:58:30.640667   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0422 16:58:30.691388   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0422 16:58:30.709458   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0422 16:58:30.749416   19497 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0422 16:58:30.749442   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0422 16:58:30.755622   19497 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0422 16:58:30.755643   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0422 16:58:30.774957   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0422 16:58:30.774978   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0422 16:58:30.785661   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 16:58:30.833245   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 16:58:30.849630   19497 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 16:58:30.849656   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0422 16:58:31.029060   19497 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0422 16:58:31.029084   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0422 16:58:31.071093   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0422 16:58:31.071114   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0422 16:58:31.081833   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0422 16:58:31.081855   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0422 16:58:31.135074   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 16:58:31.299436   19497 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0422 16:58:31.299471   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0422 16:58:31.364017   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0422 16:58:31.364044   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0422 16:58:31.525084   19497 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:31.525120   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0422 16:58:31.576828   19497 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 16:58:31.576857   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0422 16:58:31.637259   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0422 16:58:31.637285   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0422 16:58:31.717188   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:31.760673   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0422 16:58:31.760701   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0422 16:58:32.014043   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 16:58:32.111510   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0422 16:58:32.111550   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0422 16:58:32.628933   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0422 16:58:32.628967   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0422 16:58:32.801705   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.721300301s)
	I0422 16:58:32.801752   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:32.801763   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:32.802115   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:32.802137   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:32.802164   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:32.802183   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:32.802195   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:32.802420   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:32.802468   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:32.815188   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 16:58:32.815214   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0422 16:58:33.034108   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 16:58:33.435806   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.326567229s)
	I0422 16:58:33.435856   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:33.435871   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:33.435877   19497 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.323926134s)
	I0422 16:58:33.435853   19497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.323801126s)
	I0422 16:58:33.435931   19497 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0422 16:58:33.436247   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:33.436285   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:33.436296   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:33.436317   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:33.436325   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:33.437383   19497 node_ready.go:35] waiting up to 6m0s for node "addons-934361" to be "Ready" ...
	I0422 16:58:33.437523   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:33.437572   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:33.437545   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:33.447475   19497 node_ready.go:49] node "addons-934361" has status "Ready":"True"
	I0422 16:58:33.447499   19497 node_ready.go:38] duration metric: took 10.089638ms for node "addons-934361" to be "Ready" ...
	I0422 16:58:33.447507   19497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 16:58:33.486659   19497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:33.960145   19497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-934361" context rescaled to 1 replicas
	I0422 16:58:34.516937   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.364837669s)
	I0422 16:58:34.516992   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517006   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.516948   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.357978105s)
	I0422 16:58:34.517114   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517132   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.517275   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517358   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:34.517375   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517386   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.517338   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:34.517449   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517457   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:34.517462   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:34.517507   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517516   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.517740   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517758   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:34.517807   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:34.517911   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517931   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:35.710783   19497 pod_ready.go:102] pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:36.740759   19497 pod_ready.go:92] pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:36.740785   19497 pod_ready.go:81] duration metric: took 3.254095904s for pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:36.740798   19497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:36.846184   19497 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0422 16:58:36.846227   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:36.849903   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:36.850345   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:36.850377   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:36.850519   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:36.850727   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:36.850913   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:36.851114   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:37.451630   19497 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0422 16:58:37.708620   19497 addons.go:234] Setting addon gcp-auth=true in "addons-934361"
	I0422 16:58:37.708691   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:37.709134   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:37.709175   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:37.724325   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I0422 16:58:37.724808   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:37.725323   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:37.725347   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:37.725813   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:37.726462   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:37.726501   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:37.742600   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0422 16:58:37.743058   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:37.743582   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:37.743606   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:37.743941   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:37.744231   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:37.746082   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:37.746339   19497 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0422 16:58:37.746363   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:37.749201   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:37.749615   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:37.749646   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:37.749763   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:37.749961   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:37.750117   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:37.750248   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:37.752692   19497 pod_ready.go:97] pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:37 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.135 HostIPs:[{IP:192.168.39.135}] PodIP: PodIPs:[] StartTime:2024-04-22 16:58:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-04-22 16:58:35 +0000 UTC,FinishedAt:2024-04-22 16:58:35 +0000 UTC,ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f Started:0xc00216768c AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0422 16:58:37.752726   19497 pod_ready.go:81] duration metric: took 1.011920899s for pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace to be "Ready" ...
	E0422 16:58:37.752741   19497 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:37 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.135 HostIPs:[{IP:192.168.39.135}] PodIP: PodIPs:[] StartTime:2024-04-22 16:58:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-04-22 16:58:35 +0000 UTC,FinishedAt:2024-04-22 16:58:35 +0000 UTC,ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f Started:0xc00216768c AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0422 16:58:37.752751   19497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.761328   19497 pod_ready.go:92] pod "etcd-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.761361   19497 pod_ready.go:81] duration metric: took 8.597045ms for pod "etcd-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.761377   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.767965   19497 pod_ready.go:92] pod "kube-apiserver-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.767986   19497 pod_ready.go:81] duration metric: took 6.600972ms for pod "kube-apiserver-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.767995   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.778550   19497 pod_ready.go:92] pod "kube-controller-manager-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.778573   19497 pod_ready.go:81] duration metric: took 10.572069ms for pod "kube-controller-manager-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.778586   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbd87" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.785325   19497 pod_ready.go:92] pod "kube-proxy-zbd87" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.785348   19497 pod_ready.go:81] duration metric: took 6.756331ms for pod "kube-proxy-zbd87" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.785358   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:38.147267   19497 pod_ready.go:92] pod "kube-scheduler-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:38.147293   19497 pod_ready.go:81] duration metric: took 361.929087ms for pod "kube-scheduler-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:38.147303   19497 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:39.006061   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.844206343s)
	I0422 16:58:39.006120   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006118   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.833488304s)
	I0422 16:58:39.006133   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006158   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006179   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006205   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.314757591s)
	I0422 16:58:39.006231   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.296734805s)
	I0422 16:58:39.006238   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006251   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006249   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006262   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006382   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.173106073s)
	I0422 16:58:39.006406   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006415   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006490   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.871380694s)
	I0422 16:58:39.006515   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006525   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006637   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006680   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006669   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006688   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006694   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006696   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006703   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006705   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006714   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006845   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006872   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006893   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006899   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006907   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006914   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006956   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006975   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006981   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006989   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006996   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.007204   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.007228   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.007234   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.007243   19497 addons.go:470] Verifying addon metrics-server=true in "addons-934361"
	I0422 16:58:39.007283   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.007292   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.007300   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.007306   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.007351   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.007357   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.007364   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.007371   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.008721   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.008749   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.008755   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.008764   19497 addons.go:470] Verifying addon ingress=true in "addons-934361"
	I0422 16:58:39.010700   19497 out.go:177] * Verifying ingress addon...
	I0422 16:58:39.008894   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.008923   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.008954   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.008969   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.008985   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.009028   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.009042   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.22060972s)
	I0422 16:58:39.009051   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.009072   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.010750   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.010791   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.012442   19497 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-934361 service yakd-dashboard -n yakd-dashboard
	
	I0422 16:58:39.010806   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.010813   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.010824   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.012924   19497 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0422 16:58:39.014041   19497 addons.go:470] Verifying addon registry=true in "addons-934361"
	I0422 16:58:39.014090   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.015645   19497 out.go:177] * Verifying registry addon...
	I0422 16:58:39.014351   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.015680   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.015697   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.014375   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.015713   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.016035   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.016055   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.017270   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.017840   19497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0422 16:58:39.027808   19497 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0422 16:58:39.027832   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:39.037067   19497 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0422 16:58:39.037088   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:39.065421   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.065438   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.065889   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.065927   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.065953   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	W0422 16:58:39.066046   19497 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
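The warning above is an optimistic-concurrency conflict: the default-storageclass callback tries to clear the default annotation on the local-path class while another controller has just updated the object, so the apiserver rejects the stale write. Retrying the patch against the latest object version is normally enough; a manual retry would look roughly like this (the exact patch minikube issues may differ):

		# clear the default flag on local-path and set it on minikube's "standard" class
		kubectl --context addons-934361 patch storageclass local-path \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
		kubectl --context addons-934361 patch storageclass standard \
		  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'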
	I0422 16:58:39.081038   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.081061   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.081400   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.081414   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.081429   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.640868   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:39.641565   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.031701   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.038372   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:40.068672   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.054584148s)
	I0422 16:58:40.068729   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.068742   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.068813   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.351558932s)
	W0422 16:58:40.068877   19497 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0422 16:58:40.068908   19497 retry.go:31] will retry after 200.447904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
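The failure being retried here is an ordering problem: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same apply that creates the snapshot.storage.k8s.io CRDs, and the CRDs are not yet established when the class is submitted, so there is no resource mapping for it. minikube handles this by retrying (and later re-applying with --force, as seen below). Done by hand on the node, the race can be avoided by waiting for the CRD first; a sketch, using the same in-node kubectl invocation the log shows (timeout value is illustrative):

		# inside the node (minikube -p addons-934361 ssh): wait for the CRD, then re-apply the snapshot class
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl \
		  wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl \
		  apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml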
	I0422 16:58:40.069082   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.069100   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.069116   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.069118   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.069128   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.069395   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.069429   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.069436   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.205606   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:40.269748   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:40.549711   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.557703   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:40.815326   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.781169496s)
	I0422 16:58:40.815365   19497 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.069010233s)
	I0422 16:58:40.815373   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.815385   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.817182   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:40.815707   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.815749   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.818594   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.818607   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.818616   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.820477   19497 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0422 16:58:40.818881   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.818884   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.822519   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.822532   19497 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-934361"
	I0422 16:58:40.822560   19497 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0422 16:58:40.822576   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0422 16:58:40.824418   19497 out.go:177] * Verifying csi-hostpath-driver addon...
	I0422 16:58:40.826307   19497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0422 16:58:40.851943   19497 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0422 16:58:40.851966   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:41.000187   19497 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0422 16:58:41.000209   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0422 16:58:41.018436   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:41.024696   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:41.099173   19497 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 16:58:41.099198   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0422 16:58:41.178966   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 16:58:41.331872   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:41.518380   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:41.547479   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:41.836955   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:42.019270   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:42.023377   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:42.338024   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:42.522155   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:42.528992   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:42.655971   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:42.832670   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.018695   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:43.023084   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:43.107299   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.83749728s)
	I0422 16:58:43.107345   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.107360   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.107670   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.107731   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.107745   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.107754   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.107687   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:43.107993   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.108012   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.108023   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:43.347000   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.444263   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.265255283s)
	I0422 16:58:43.444325   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.444342   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.444698   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.444742   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.444755   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.444764   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.444988   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.445012   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.447111   19497 addons.go:470] Verifying addon gcp-auth=true in "addons-934361"
	I0422 16:58:43.448925   19497 out.go:177] * Verifying gcp-auth addon...
	I0422 16:58:43.450935   19497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0422 16:58:43.483068   19497 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0422 16:58:43.483089   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:43.525773   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:43.526704   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:43.832651   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.955657   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:44.019263   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:44.022929   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:44.333026   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:44.455010   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:44.519769   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:44.522468   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:44.656170   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:44.832954   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:44.954724   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:45.018677   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:45.022819   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:45.333106   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:45.454153   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:45.521837   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:45.528196   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:45.832605   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:45.955231   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:46.020512   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:46.023459   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:46.331515   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:46.455366   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:46.521604   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:46.525546   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:46.667781   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:46.937303   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:46.955241   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:47.020261   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:47.024125   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:47.332558   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:47.455208   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:47.519659   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:47.522186   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:47.832715   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:47.955946   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:48.021590   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:48.027599   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:48.335144   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:48.455451   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:48.520352   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:48.527448   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:48.837470   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:48.954347   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:49.018433   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:49.022333   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:49.153774   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:49.332701   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:49.454701   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:49.519232   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:49.522336   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:49.832483   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:49.955422   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:50.018918   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:50.021828   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:50.332711   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:50.455112   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:50.520253   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:50.523221   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:50.832226   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:50.954877   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:51.019323   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:51.021996   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:51.153813   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:51.332486   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:51.455588   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:51.519163   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:51.522231   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:51.838073   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:51.955018   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:52.021372   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:52.023681   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:52.333391   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:52.455290   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:52.519950   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:52.526301   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:52.832636   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:52.955324   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:53.019394   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:53.022130   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:53.154480   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:53.332285   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:53.455565   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:53.519248   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:53.522072   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:53.833019   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:53.954557   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:54.018700   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:54.022538   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:54.332099   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:54.453918   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:54.519334   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:54.522244   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:54.833090   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:54.954870   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:55.019029   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:55.022643   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:55.156910   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:55.332593   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:55.455106   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:55.519629   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:55.527103   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:55.831686   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.144651   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:56.147604   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:56.147695   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:56.333178   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.455272   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:56.518870   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:56.523177   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:56.832473   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.954883   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:57.018948   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:57.022803   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:57.332556   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:57.456282   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:57.518129   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:57.521904   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:57.653734   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:57.836398   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:57.955576   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:58.018770   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:58.021652   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:58.332748   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:58.454935   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:58.520423   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:58.523168   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:58.831811   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:58.958475   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:59.019134   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:59.025129   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:59.336018   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:59.455492   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:59.518376   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:59.522497   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:59.654153   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:59.835546   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:59.955284   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:00.018503   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:00.022328   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:00.332359   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:00.456069   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:00.521387   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:00.523989   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:00.653894   19497 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"True"
	I0422 16:59:00.653921   19497 pod_ready.go:81] duration metric: took 22.506610483s for pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace to be "Ready" ...
	I0422 16:59:00.653930   19497 pod_ready.go:38] duration metric: took 27.206413168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 16:59:00.653948   19497 api_server.go:52] waiting for apiserver process to appear ...
	I0422 16:59:00.654014   19497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 16:59:00.673151   19497 api_server.go:72] duration metric: took 31.054671465s to wait for apiserver process to appear ...
	I0422 16:59:00.673179   19497 api_server.go:88] waiting for apiserver healthz status ...
	I0422 16:59:00.673198   19497 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0422 16:59:00.678202   19497 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
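The healthz probe logged here is just an HTTPS GET against the apiserver's /healthz endpoint, which returns the plain-text body "ok" on success. The same check can be made by hand (curl needs -k because the apiserver certificate is not trusted by the host; kubectl handles the client certificates itself):

		# direct probe of the endpoint from the log
		curl -k https://192.168.39.135:8443/healthz
		# equivalent check through kubectl
		kubectl --context addons-934361 get --raw /healthz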
	I0422 16:59:00.680182   19497 api_server.go:141] control plane version: v1.30.0
	I0422 16:59:00.680209   19497 api_server.go:131] duration metric: took 7.023803ms to wait for apiserver health ...
	I0422 16:59:00.680217   19497 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 16:59:00.695312   19497 system_pods.go:59] 18 kube-system pods found
	I0422 16:59:00.695343   19497 system_pods.go:61] "coredns-7db6d8ff4d-9kl4l" [46deec4f-c97e-48aa-b1ca-9c679e0a64e2] Running
	I0422 16:59:00.695356   19497 system_pods.go:61] "csi-hostpath-attacher-0" [d74d70fb-d561-4814-8fe7-4ff8c0a23bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0422 16:59:00.695362   19497 system_pods.go:61] "csi-hostpath-resizer-0" [9b290af7-b399-4289-82ab-afc3b871ed37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0422 16:59:00.695369   19497 system_pods.go:61] "csi-hostpathplugin-zjt6m" [31721d0b-bd0c-4744-bad2-98ec78059355] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0422 16:59:00.695375   19497 system_pods.go:61] "etcd-addons-934361" [c2ae446c-1bbb-455a-a0fb-f17ec9c211dd] Running
	I0422 16:59:00.695381   19497 system_pods.go:61] "kube-apiserver-addons-934361" [b19e33d4-127e-4da6-808f-32eb6d5a3d90] Running
	I0422 16:59:00.695386   19497 system_pods.go:61] "kube-controller-manager-addons-934361" [6163c15c-68c4-4c0a-93ec-970325ddd8ce] Running
	I0422 16:59:00.695392   19497 system_pods.go:61] "kube-ingress-dns-minikube" [0a75b318-14a2-4ad7-805f-363d1863bbdb] Running
	I0422 16:59:00.695399   19497 system_pods.go:61] "kube-proxy-zbd87" [b08b8c4d-9f59-4f64-8503-e5d055487f74] Running
	I0422 16:59:00.695408   19497 system_pods.go:61] "kube-scheduler-addons-934361" [961651f6-0a94-4bc5-883d-63e42ce76c03] Running
	I0422 16:59:00.695414   19497 system_pods.go:61] "metrics-server-c59844bb4-9rwbq" [be72f5e4-ae81-48d6-b57f-d9640e75904a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 16:59:00.695420   19497 system_pods.go:61] "nvidia-device-plugin-daemonset-ht2fz" [9f97974a-3d52-4db6-9187-920d1c7c72f3] Running
	I0422 16:59:00.695427   19497 system_pods.go:61] "registry-proxy-nzg6s" [033a658d-3f50-4962-ac56-dcf30ac650c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0422 16:59:00.695435   19497 system_pods.go:61] "registry-srp9r" [b6334572-9ae2-4f63-8d71-d5ec2df78324] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 16:59:00.695445   19497 system_pods.go:61] "snapshot-controller-745499f584-hlhfk" [b43966fb-693d-4dda-b93e-dcfdeb860226] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.695453   19497 system_pods.go:61] "snapshot-controller-745499f584-p498f" [87a1cbcf-3ba9-42bf-a292-39bc33617c0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.695460   19497 system_pods.go:61] "storage-provisioner" [eddb4fb4-7de5-44ef-9bac-3930ce87160c] Running
	I0422 16:59:00.695465   19497 system_pods.go:61] "tiller-deploy-6677d64bcd-fp7n8" [8ca5bebc-4067-46c4-b889-2eae5e85437d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0422 16:59:00.695475   19497 system_pods.go:74] duration metric: took 15.251388ms to wait for pod list to return data ...
	I0422 16:59:00.695489   19497 default_sa.go:34] waiting for default service account to be created ...
	I0422 16:59:00.698527   19497 default_sa.go:45] found service account: "default"
	I0422 16:59:00.698549   19497 default_sa.go:55] duration metric: took 3.049747ms for default service account to be created ...
	I0422 16:59:00.698557   19497 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 16:59:00.710668   19497 system_pods.go:86] 18 kube-system pods found
	I0422 16:59:00.710697   19497 system_pods.go:89] "coredns-7db6d8ff4d-9kl4l" [46deec4f-c97e-48aa-b1ca-9c679e0a64e2] Running
	I0422 16:59:00.710705   19497 system_pods.go:89] "csi-hostpath-attacher-0" [d74d70fb-d561-4814-8fe7-4ff8c0a23bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0422 16:59:00.710711   19497 system_pods.go:89] "csi-hostpath-resizer-0" [9b290af7-b399-4289-82ab-afc3b871ed37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0422 16:59:00.710720   19497 system_pods.go:89] "csi-hostpathplugin-zjt6m" [31721d0b-bd0c-4744-bad2-98ec78059355] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0422 16:59:00.710724   19497 system_pods.go:89] "etcd-addons-934361" [c2ae446c-1bbb-455a-a0fb-f17ec9c211dd] Running
	I0422 16:59:00.710729   19497 system_pods.go:89] "kube-apiserver-addons-934361" [b19e33d4-127e-4da6-808f-32eb6d5a3d90] Running
	I0422 16:59:00.710735   19497 system_pods.go:89] "kube-controller-manager-addons-934361" [6163c15c-68c4-4c0a-93ec-970325ddd8ce] Running
	I0422 16:59:00.710741   19497 system_pods.go:89] "kube-ingress-dns-minikube" [0a75b318-14a2-4ad7-805f-363d1863bbdb] Running
	I0422 16:59:00.710746   19497 system_pods.go:89] "kube-proxy-zbd87" [b08b8c4d-9f59-4f64-8503-e5d055487f74] Running
	I0422 16:59:00.710754   19497 system_pods.go:89] "kube-scheduler-addons-934361" [961651f6-0a94-4bc5-883d-63e42ce76c03] Running
	I0422 16:59:00.710764   19497 system_pods.go:89] "metrics-server-c59844bb4-9rwbq" [be72f5e4-ae81-48d6-b57f-d9640e75904a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 16:59:00.710780   19497 system_pods.go:89] "nvidia-device-plugin-daemonset-ht2fz" [9f97974a-3d52-4db6-9187-920d1c7c72f3] Running
	I0422 16:59:00.710789   19497 system_pods.go:89] "registry-proxy-nzg6s" [033a658d-3f50-4962-ac56-dcf30ac650c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0422 16:59:00.710799   19497 system_pods.go:89] "registry-srp9r" [b6334572-9ae2-4f63-8d71-d5ec2df78324] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 16:59:00.710807   19497 system_pods.go:89] "snapshot-controller-745499f584-hlhfk" [b43966fb-693d-4dda-b93e-dcfdeb860226] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.710816   19497 system_pods.go:89] "snapshot-controller-745499f584-p498f" [87a1cbcf-3ba9-42bf-a292-39bc33617c0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.710821   19497 system_pods.go:89] "storage-provisioner" [eddb4fb4-7de5-44ef-9bac-3930ce87160c] Running
	I0422 16:59:00.710828   19497 system_pods.go:89] "tiller-deploy-6677d64bcd-fp7n8" [8ca5bebc-4067-46c4-b889-2eae5e85437d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0422 16:59:00.710837   19497 system_pods.go:126] duration metric: took 12.275729ms to wait for k8s-apps to be running ...
	I0422 16:59:00.710849   19497 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 16:59:00.710905   19497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 16:59:00.727079   19497 system_svc.go:56] duration metric: took 16.221619ms WaitForService to wait for kubelet
	I0422 16:59:00.727111   19497 kubeadm.go:576] duration metric: took 31.108635829s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 16:59:00.727139   19497 node_conditions.go:102] verifying NodePressure condition ...
	I0422 16:59:00.730323   19497 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 16:59:00.730346   19497 node_conditions.go:123] node cpu capacity is 2
	I0422 16:59:00.730364   19497 node_conditions.go:105] duration metric: took 3.220639ms to run NodePressure ...
	I0422 16:59:00.730375   19497 start.go:240] waiting for startup goroutines ...
	I0422 16:59:00.832333   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:00.955365   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:01.018477   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:01.023113   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:01.332525   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:01.456461   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:01.519454   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:01.522465   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:01.832737   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:01.954518   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:02.019060   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:02.024010   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:02.332172   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:02.454958   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:02.519479   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:02.522643   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:02.832556   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:02.955331   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:03.018583   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:03.021650   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:03.352164   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:03.455870   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:03.518922   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:03.522617   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:03.836415   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:03.954931   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:04.018897   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:04.022915   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:04.331955   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:04.455108   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:04.518964   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:04.523760   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:04.832752   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:04.954672   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:05.018135   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:05.021706   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:05.332490   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:05.455527   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:05.519611   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:05.522007   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:05.832502   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:05.956074   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:06.019102   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:06.022704   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:06.333197   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:06.459316   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:06.520339   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:06.522470   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:06.832593   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:06.954773   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:07.018955   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:07.022725   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:07.334410   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:07.455636   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:07.519219   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:07.521461   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:07.832200   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:07.955158   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:08.019558   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:08.022464   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:08.332661   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:08.455805   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:08.519939   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:08.523282   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:08.832773   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:08.955149   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:09.019474   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:09.022836   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:09.335032   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:09.467034   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:09.523110   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:09.532325   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:09.833045   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:09.955145   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:10.020821   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:10.023999   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:10.443643   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:10.455860   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:10.519036   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:10.524123   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:10.837655   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:10.955349   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:11.019251   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:11.022427   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:11.333272   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:11.454776   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:11.518931   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:11.521744   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:11.834920   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:11.954901   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:12.019408   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:12.021769   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:12.332004   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:12.463261   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:12.519574   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:12.522601   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:12.833736   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:12.955148   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:13.019045   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:13.021817   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:13.333808   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:13.454975   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:13.524601   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:13.529288   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:13.832380   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:13.955608   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:14.023102   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:14.025272   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:14.336341   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:14.455159   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:14.519407   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:14.522395   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:14.831687   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:14.955172   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:15.019066   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:15.021903   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:15.335790   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:15.942968   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:15.943660   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:15.944051   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:15.944081   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:15.954884   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:16.018649   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:16.021430   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:16.332692   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:16.454201   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:16.519543   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:16.522306   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:16.832769   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:16.954370   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:17.024055   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:17.025825   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:17.332912   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:17.455168   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:17.519492   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:17.521811   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:17.831983   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:17.954627   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:18.018576   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:18.022898   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:18.333655   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:18.456877   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:18.519404   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:18.522065   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:18.835358   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:18.955448   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:19.019659   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:19.022604   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:19.331906   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:19.455041   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:19.519380   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:19.522159   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:19.834012   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:19.954922   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:20.021025   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:20.023034   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:20.333268   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:20.456596   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:20.521493   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:20.525592   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:20.832298   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:20.956730   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:21.018675   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:21.021811   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:21.331863   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:21.454327   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:21.518476   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:21.522840   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:21.842315   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:21.958638   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:22.018223   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:22.023322   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:22.335989   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:22.457733   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:22.518767   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:22.522392   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:22.832400   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:22.955377   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:23.020538   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:23.022586   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:23.331360   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:23.455465   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:23.519464   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:23.526589   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:23.832687   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:23.955399   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:24.018181   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:24.022394   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:24.333134   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:24.456556   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:24.640271   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:24.644793   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:24.833279   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:24.954730   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:25.020020   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:25.022327   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:25.334455   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:25.455363   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:25.518498   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:25.522276   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:25.833180   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:25.955332   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:26.019331   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:26.022381   19497 kapi.go:107] duration metric: took 47.004537115s to wait for kubernetes.io/minikube-addons=registry ...
	I0422 16:59:26.336297   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:26.455209   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:26.519099   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:26.832544   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:26.955182   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:27.019568   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:27.333360   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:27.454763   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:27.518965   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:27.832338   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:27.957013   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:28.019607   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:28.331402   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:28.455284   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:28.518688   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:28.833020   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:28.955935   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:29.019032   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:29.331711   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:29.454860   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:29.518849   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:29.832633   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:29.955809   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:30.018531   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:30.332727   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:30.455138   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:30.519688   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:30.833190   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:30.954628   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:31.018744   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:31.332365   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:31.454948   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:31.518692   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:31.830920   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:31.955196   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:32.019081   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:32.332030   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:32.454820   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:32.519258   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:32.832667   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:32.954293   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:33.019550   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:33.331525   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:33.455392   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:33.519561   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:33.832290   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:33.955282   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:34.020184   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:34.332619   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:34.455151   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:34.519517   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:34.832511   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:34.955941   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:35.019281   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:35.331777   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:35.454195   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:35.520806   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:35.833038   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:35.955174   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:36.019103   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:36.332648   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:36.455515   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:36.519544   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:36.835933   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:36.955439   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:37.018698   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:37.331551   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:37.455224   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:37.519523   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:37.832875   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:37.955777   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:38.019656   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:38.332159   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:38.455418   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:38.518392   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:38.838018   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:38.954540   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:39.018854   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:39.332139   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:39.454743   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:39.518668   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:39.831896   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:39.957884   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:40.018523   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:40.332949   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:40.454696   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:40.518605   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:40.850671   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:40.954194   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:41.018918   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:41.332240   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:41.455611   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:41.518979   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:41.832121   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:41.957297   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:42.018914   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:42.333712   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:42.455656   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:42.519107   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:42.836296   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:42.955383   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:43.019367   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:43.331257   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:43.454662   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:43.518675   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:43.832634   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:43.955410   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:44.025010   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:44.332594   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:44.455499   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:44.518502   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:44.833757   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:44.954682   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:45.019144   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:45.332120   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:45.455160   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:45.519114   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:45.832355   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:45.954569   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:46.020014   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:46.332265   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:46.456304   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:46.518698   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:46.831266   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:46.956170   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:47.019203   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:47.331886   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:47.454618   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:47.518415   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:47.832319   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:47.954993   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:48.020497   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:48.344170   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:48.454790   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:48.521902   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:48.837667   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:48.956315   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:49.018825   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:49.331206   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:49.455886   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:49.518726   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:49.831625   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:49.955430   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:50.018472   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:50.332230   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:50.455255   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:50.519549   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:50.833028   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:50.954359   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:51.018462   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:51.332314   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:51.455382   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:51.518510   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:51.832972   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:51.954863   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:52.018807   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:52.335063   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:52.454555   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:52.518980   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:52.831213   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:52.955095   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:53.019167   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:53.332215   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:53.454932   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:53.519290   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:53.835498   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:53.955646   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:54.018560   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:54.332934   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:54.454971   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:54.519114   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:54.834845   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:54.957216   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:55.019195   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:55.332013   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:55.454628   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:55.518675   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:55.832940   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:55.955051   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:56.018682   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:56.333290   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:56.455051   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:56.519438   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:56.832320   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:56.957714   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:57.028329   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:57.333234   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:57.454766   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:57.519145   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:57.832328   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:57.955455   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:58.018726   19497 kapi.go:107] duration metric: took 1m19.005798441s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0422 16:59:58.332486   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:58.455354   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:58.833423   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:58.954836   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:59.332687   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:59.455673   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:59.832467   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:59.955674   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 17:00:00.331758   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:00.455761   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 17:00:00.832703   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:00.954568   19497 kapi.go:107] duration metric: took 1m17.503631425s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0422 17:00:00.956644   19497 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-934361 cluster.
	I0422 17:00:00.958231   19497 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0422 17:00:00.959822   19497 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0422 17:00:01.332022   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:01.833334   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:02.332440   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:02.832672   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:03.333087   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:03.833008   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:04.333993   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:04.833559   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:05.333628   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:05.832645   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:06.332476   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:06.835867   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:07.337346   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:07.833043   19497 kapi.go:107] duration metric: took 1m27.0067329s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0422 17:00:07.835095   19497 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, helm-tiller, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0422 17:00:07.836547   19497 addons.go:505] duration metric: took 1m38.218045148s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner metrics-server yakd helm-tiller storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0422 17:00:07.836591   19497 start.go:245] waiting for cluster config update ...
	I0422 17:00:07.836625   19497 start.go:254] writing updated cluster config ...
	I0422 17:00:07.836871   19497 ssh_runner.go:195] Run: rm -f paused
	I0422 17:00:07.888626   19497 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 17:00:07.890795   19497 out.go:177] * Done! kubectl is now configured to use "addons-934361" cluster and "default" namespace by default
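	The gcp-auth messages above mention two controls: a `gcp-auth-skip-secret` label that keeps the credentials out of a particular pod, and re-running `addons enable` with `--refresh` to mount credentials into pods that existed before the addon came up. A minimal sketch of both, assuming the addons-934361 profile from this run (the pod name and image below are illustrative, not taken from the test):
	
	# Hypothetical pod: the skip label tells the gcp-auth webhook not to mount credentials at admission time.
	kubectl --context addons-934361 run skip-gcp-auth-demo --image=busybox:stable \
	  --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 3600
	
	# As suggested above, re-run the addon with --refresh so pods created before gcp-auth finished get credentials without being recreated.
	minikube -p addons-934361 addons enable gcp-auth --refresh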
	
	
	==> CRI-O <==
	Apr 22 17:02:55 addons-934361 crio[687]: time="2024-04-22 17:02:55.976191901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713805375976157412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b51c429c-e59a-46c7-8d57-4db804b64580 name=/runtime.v1.ImageService/ImageFsInfo
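	The ImageFsInfo and ListContainers entries in this section are CRI requests arriving at CRI-O, typically issued by the kubelet. The same data can be pulled by hand with crictl from inside the node; a rough sketch, assuming the addons-934361 profile from this run (the crictl subcommands are the upstream ones, not taken from this log):
	
	# Image filesystem usage, the manual counterpart of the ImageFsInfoResponse above.
	minikube -p addons-934361 ssh -- sudo crictl imagefsinfo
	
	# Full container list, matching the ListContainersResponse that follows below.
	minikube -p addons-934361 ssh -- sudo crictl ps -a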
	Apr 22 17:02:55 addons-934361 crio[687]: time="2024-04-22 17:02:55.976974905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7e20788-7656-4624-ab96-a050cf0fad52 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:55 addons-934361 crio[687]: time="2024-04-22 17:02:55.977078045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7e20788-7656-4624-ab96-a050cf0fad52 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:55 addons-934361 crio[687]: time="2024-04-22 17:02:55.977548829Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9df277752128a44f9e298d1a2296a2bce4750d3a1c337bf3938708926a65cb56,PodSandboxId:2c3dc4771468f9728a01b77e14ad419ea8b99f83f75c25c4b3b7f4f2d3bb47f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713805369539163809,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-zdkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f7d61ea-53c0-4922-9b21-e8daf0c21bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 2657a6f6,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9926944e9d15780564b120bb3d82fdc6172add917ca19b3a512b5e95405bdc5d,PodSandboxId:bb751305424892411c6ebd876b38752fd70b8c5c74a8b14c493973ed31688055,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713805318561774948,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-jx57l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 05d7c185-fcb3-4db1-941f-58c4cf86a75f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 4680ca88,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f302e7c44f2215a4eb4dc05c17696313ac15c01877ceeb4d34d0b451097e36ae,PodSandboxId:9b6c83eadeecd540eb3d823a28df9c9191d19e1349bb76d7ee640f5ace8fd487,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713805229544862244,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 54f74d8d-0de6-4905-8880-cdc716c944b3,},Annotations:map[string]string{io.kubernetes.container.hash: 7742d2c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac,PodSandboxId:e7dc89ac1feef0671d5d95fff08c7b88cf4313ffb379e61507b3fc92b599af4c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713805200007867342,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-hb6nw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 616d4a54-5dfb-45cc-9b0a-a2461bbdc3e8,},Annotations:map[string]string{io.kubernetes.container.hash: ff242c65,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f53d8807307fb58fd40a6301d224dba6301156f19ff29d351a9adba4825e3d,PodSandboxId:d315380f6f0c80ead3fb4bf151d25e08896780af9efb8b3da9417b402bc724d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_E
XITED,CreatedAt:1713805196981624080,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-84df5799c-kk8bl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f4a856af-bd62-4050-b05b-81914b90e27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f29c00c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6962942a6d92d4b0e2d100987670d7b3ddc655dbd50d9ba62da86116de79867,PodSandboxId:12961f9b2c6744253f73ce127ce9f7108f
0f22f81da57ad2717217a56a90bdbf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181146930288,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jcxb8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 86ce0533-4864-45ac-859b-65b1bc8630ea,},Annotations:map[string]string{io.kubernetes.container.hash: 8ab4dd03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4057c59bb2224b137638b04e782f5f48d6ae9ae4fc2a1fc61b6e95b6bd1a3,PodSandboxId
:6a92cfec1da9afbddab2e321e92123eaa892d1056b567e0abe5806c2b5d0dcbc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181021339644,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ffzh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2b42d4d2-2f3e-4bf8-af3d-eec264315cdc,},Annotations:map[string]string{io.kubernetes.container.hash: dfa3b370,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992ddaec1770dd27b6fcf280f55f128597b66f
ded6028ed19bd06581c6a7af4,PodSandboxId:f4386e68702ef1455c642103ff9814ba90f77b0f7ef5e4021e8659af4c45530f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713805176582586683,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-dqx5m,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3cca16d5-c0b9-4588-87c2-aa2cdbcbe7d9,},Annotations:map[string]string{io.kubernetes.container.hash: ecfe4f92,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a,PodSandboxId:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713805167902556409,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-9rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be72f5e4-ae81-48d6-b57f-d9640e75904a,},Annotations:map[string]string{io.kubernetes.container.hash: dde1c527,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0,PodSandboxId:9a83f28ec25a17677c9a3dbe8eaf24730d2f8678d9e0d72053a42122b232eb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805116439144899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddb4fb4-7de5-44ef-9bac-3930ce87160c,},Annotations:map[string]string{io.kubernetes.container.hash: a017758,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839,PodSandboxId:c16f432ead2fcea0e9e3dc709ce1f66044a91451e425e92bb0e3b50e8b8fd5d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805114646949234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9kl4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46deec4f-c97e-48aa-b1ca-9c679e0a64e2,},Annotations:map[string]string{io.kubernetes.container.hash: 85712065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":5
3,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160,PodSandboxId:6a5e4353dde8f7e0fadf7a0693d9c624a94121396d4daa820ffb5ed996ef7e32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805111283702745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbd87,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: b08b8c4d-9f59-4f64-8503-e5d055487f74,},Annotations:map[string]string{io.kubernetes.container.hash: 4d716e8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d,PodSandboxId:aa58a6a1d4ea9b4ae0775732d9149c31b5d9f97c57149b082c6a5ba21fd7d06a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805090322229442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-934361,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: d903f02e20fa6303480e5550d5ff53c6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09,PodSandboxId:ed579bd1a44698e8dbb907cd5e2b51bbcc0b49cf0b9581ea5a0502d71b9a3462,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713805090294253425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4a435bd17231af3fb00e256e0ac8b418,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e,PodSandboxId:9c1684f3b30a58043ea007f2cf40ec39dc7abe196b588d110a0be598ded10ee9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713805090237410245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70
b88177cb739840d9af5145b71cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 48ba8371,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0,PodSandboxId:37d43ed8797170034a3fe41c1f1b7b1de3a9600a846a92f67d2a7cfd4d831e11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713805090204249501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c66e348c58730d7efb8ebd6834f7506,},Annotations:map[
string]string{io.kubernetes.container.hash: 68bbc046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7e20788-7656-4624-ab96-a050cf0fad52 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.022440861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d833096-eef2-4041-9860-93dceb957ab7 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.022541614Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d833096-eef2-4041-9860-93dceb957ab7 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.024760794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0334e5ff-36ca-4bfd-93f9-a48b62cd40a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.026755427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713805376026724906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0334e5ff-36ca-4bfd-93f9-a48b62cd40a6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.027594916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a00e5890-b70f-45a3-8d00-f48675efcc64 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.027674473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a00e5890-b70f-45a3-8d00-f48675efcc64 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.028169848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9df277752128a44f9e298d1a2296a2bce4750d3a1c337bf3938708926a65cb56,PodSandboxId:2c3dc4771468f9728a01b77e14ad419ea8b99f83f75c25c4b3b7f4f2d3bb47f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713805369539163809,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-zdkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f7d61ea-53c0-4922-9b21-e8daf0c21bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 2657a6f6,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9926944e9d15780564b120bb3d82fdc6172add917ca19b3a512b5e95405bdc5d,PodSandboxId:bb751305424892411c6ebd876b38752fd70b8c5c74a8b14c493973ed31688055,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713805318561774948,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-jx57l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 05d7c185-fcb3-4db1-941f-58c4cf86a75f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 4680ca88,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f302e7c44f2215a4eb4dc05c17696313ac15c01877ceeb4d34d0b451097e36ae,PodSandboxId:9b6c83eadeecd540eb3d823a28df9c9191d19e1349bb76d7ee640f5ace8fd487,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713805229544862244,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 54f74d8d-0de6-4905-8880-cdc716c944b3,},Annotations:map[string]string{io.kubernetes.container.hash: 7742d2c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac,PodSandboxId:e7dc89ac1feef0671d5d95fff08c7b88cf4313ffb379e61507b3fc92b599af4c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713805200007867342,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-hb6nw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 616d4a54-5dfb-45cc-9b0a-a2461bbdc3e8,},Annotations:map[string]string{io.kubernetes.container.hash: ff242c65,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f53d8807307fb58fd40a6301d224dba6301156f19ff29d351a9adba4825e3d,PodSandboxId:d315380f6f0c80ead3fb4bf151d25e08896780af9efb8b3da9417b402bc724d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_E
XITED,CreatedAt:1713805196981624080,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-84df5799c-kk8bl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f4a856af-bd62-4050-b05b-81914b90e27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f29c00c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6962942a6d92d4b0e2d100987670d7b3ddc655dbd50d9ba62da86116de79867,PodSandboxId:12961f9b2c6744253f73ce127ce9f7108f
0f22f81da57ad2717217a56a90bdbf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181146930288,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jcxb8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 86ce0533-4864-45ac-859b-65b1bc8630ea,},Annotations:map[string]string{io.kubernetes.container.hash: 8ab4dd03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4057c59bb2224b137638b04e782f5f48d6ae9ae4fc2a1fc61b6e95b6bd1a3,PodSandboxId
:6a92cfec1da9afbddab2e321e92123eaa892d1056b567e0abe5806c2b5d0dcbc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181021339644,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ffzh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2b42d4d2-2f3e-4bf8-af3d-eec264315cdc,},Annotations:map[string]string{io.kubernetes.container.hash: dfa3b370,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992ddaec1770dd27b6fcf280f55f128597b66f
ded6028ed19bd06581c6a7af4,PodSandboxId:f4386e68702ef1455c642103ff9814ba90f77b0f7ef5e4021e8659af4c45530f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713805176582586683,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-dqx5m,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3cca16d5-c0b9-4588-87c2-aa2cdbcbe7d9,},Annotations:map[string]string{io.kubernetes.container.hash: ecfe4f92,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a,PodSandboxId:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713805167902556409,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-9rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be72f5e4-ae81-48d6-b57f-d9640e75904a,},Annotations:map[string]string{io.kubernetes.container.hash: dde1c527,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0,PodSandboxId:9a83f28ec25a17677c9a3dbe8eaf24730d2f8678d9e0d72053a42122b232eb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805116439144899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddb4fb4-7de5-44ef-9bac-3930ce87160c,},Annotations:map[string]string{io.kubernetes.container.hash: a017758,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839,PodSandboxId:c16f432ead2fcea0e9e3dc709ce1f66044a91451e425e92bb0e3b50e8b8fd5d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805114646949234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9kl4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46deec4f-c97e-48aa-b1ca-9c679e0a64e2,},Annotations:map[string]string{io.kubernetes.container.hash: 85712065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":5
3,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160,PodSandboxId:6a5e4353dde8f7e0fadf7a0693d9c624a94121396d4daa820ffb5ed996ef7e32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805111283702745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbd87,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: b08b8c4d-9f59-4f64-8503-e5d055487f74,},Annotations:map[string]string{io.kubernetes.container.hash: 4d716e8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d,PodSandboxId:aa58a6a1d4ea9b4ae0775732d9149c31b5d9f97c57149b082c6a5ba21fd7d06a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805090322229442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-934361,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: d903f02e20fa6303480e5550d5ff53c6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09,PodSandboxId:ed579bd1a44698e8dbb907cd5e2b51bbcc0b49cf0b9581ea5a0502d71b9a3462,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713805090294253425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4a435bd17231af3fb00e256e0ac8b418,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e,PodSandboxId:9c1684f3b30a58043ea007f2cf40ec39dc7abe196b588d110a0be598ded10ee9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713805090237410245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70
b88177cb739840d9af5145b71cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 48ba8371,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0,PodSandboxId:37d43ed8797170034a3fe41c1f1b7b1de3a9600a846a92f67d2a7cfd4d831e11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713805090204249501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c66e348c58730d7efb8ebd6834f7506,},Annotations:map[
string]string{io.kubernetes.container.hash: 68bbc046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a00e5890-b70f-45a3-8d00-f48675efcc64 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.067073212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e036fd5-269a-48f6-8407-c9849dabda8f name=/runtime.v1.RuntimeService/Version
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.067186065Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e036fd5-269a-48f6-8407-c9849dabda8f name=/runtime.v1.RuntimeService/Version
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.068926655Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e3664ea-3d3e-4718-b571-9d1b774441ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.070539959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713805376070506159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e3664ea-3d3e-4718-b571-9d1b774441ee name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.071261920Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1c69881-5af2-497a-8339-c97ee5e6d4ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.071320327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1c69881-5af2-497a-8339-c97ee5e6d4ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.071701854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9df277752128a44f9e298d1a2296a2bce4750d3a1c337bf3938708926a65cb56,PodSandboxId:2c3dc4771468f9728a01b77e14ad419ea8b99f83f75c25c4b3b7f4f2d3bb47f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713805369539163809,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-zdkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f7d61ea-53c0-4922-9b21-e8daf0c21bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 2657a6f6,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9926944e9d15780564b120bb3d82fdc6172add917ca19b3a512b5e95405bdc5d,PodSandboxId:bb751305424892411c6ebd876b38752fd70b8c5c74a8b14c493973ed31688055,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713805318561774948,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-jx57l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 05d7c185-fcb3-4db1-941f-58c4cf86a75f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 4680ca88,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f302e7c44f2215a4eb4dc05c17696313ac15c01877ceeb4d34d0b451097e36ae,PodSandboxId:9b6c83eadeecd540eb3d823a28df9c9191d19e1349bb76d7ee640f5ace8fd487,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713805229544862244,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 54f74d8d-0de6-4905-8880-cdc716c944b3,},Annotations:map[string]string{io.kubernetes.container.hash: 7742d2c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac,PodSandboxId:e7dc89ac1feef0671d5d95fff08c7b88cf4313ffb379e61507b3fc92b599af4c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713805200007867342,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-hb6nw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 616d4a54-5dfb-45cc-9b0a-a2461bbdc3e8,},Annotations:map[string]string{io.kubernetes.container.hash: ff242c65,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f53d8807307fb58fd40a6301d224dba6301156f19ff29d351a9adba4825e3d,PodSandboxId:d315380f6f0c80ead3fb4bf151d25e08896780af9efb8b3da9417b402bc724d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_E
XITED,CreatedAt:1713805196981624080,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-84df5799c-kk8bl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f4a856af-bd62-4050-b05b-81914b90e27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f29c00c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6962942a6d92d4b0e2d100987670d7b3ddc655dbd50d9ba62da86116de79867,PodSandboxId:12961f9b2c6744253f73ce127ce9f7108f
0f22f81da57ad2717217a56a90bdbf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181146930288,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jcxb8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 86ce0533-4864-45ac-859b-65b1bc8630ea,},Annotations:map[string]string{io.kubernetes.container.hash: 8ab4dd03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4057c59bb2224b137638b04e782f5f48d6ae9ae4fc2a1fc61b6e95b6bd1a3,PodSandboxId
:6a92cfec1da9afbddab2e321e92123eaa892d1056b567e0abe5806c2b5d0dcbc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181021339644,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ffzh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2b42d4d2-2f3e-4bf8-af3d-eec264315cdc,},Annotations:map[string]string{io.kubernetes.container.hash: dfa3b370,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992ddaec1770dd27b6fcf280f55f128597b66f
ded6028ed19bd06581c6a7af4,PodSandboxId:f4386e68702ef1455c642103ff9814ba90f77b0f7ef5e4021e8659af4c45530f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713805176582586683,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-dqx5m,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3cca16d5-c0b9-4588-87c2-aa2cdbcbe7d9,},Annotations:map[string]string{io.kubernetes.container.hash: ecfe4f92,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a,PodSandboxId:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713805167902556409,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-9rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be72f5e4-ae81-48d6-b57f-d9640e75904a,},Annotations:map[string]string{io.kubernetes.container.hash: dde1c527,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0,PodSandboxId:9a83f28ec25a17677c9a3dbe8eaf24730d2f8678d9e0d72053a42122b232eb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805116439144899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddb4fb4-7de5-44ef-9bac-3930ce87160c,},Annotations:map[string]string{io.kubernetes.container.hash: a017758,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839,PodSandboxId:c16f432ead2fcea0e9e3dc709ce1f66044a91451e425e92bb0e3b50e8b8fd5d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805114646949234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9kl4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46deec4f-c97e-48aa-b1ca-9c679e0a64e2,},Annotations:map[string]string{io.kubernetes.container.hash: 85712065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":5
3,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160,PodSandboxId:6a5e4353dde8f7e0fadf7a0693d9c624a94121396d4daa820ffb5ed996ef7e32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805111283702745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbd87,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: b08b8c4d-9f59-4f64-8503-e5d055487f74,},Annotations:map[string]string{io.kubernetes.container.hash: 4d716e8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d,PodSandboxId:aa58a6a1d4ea9b4ae0775732d9149c31b5d9f97c57149b082c6a5ba21fd7d06a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805090322229442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-934361,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: d903f02e20fa6303480e5550d5ff53c6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09,PodSandboxId:ed579bd1a44698e8dbb907cd5e2b51bbcc0b49cf0b9581ea5a0502d71b9a3462,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713805090294253425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4a435bd17231af3fb00e256e0ac8b418,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e,PodSandboxId:9c1684f3b30a58043ea007f2cf40ec39dc7abe196b588d110a0be598ded10ee9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713805090237410245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70
b88177cb739840d9af5145b71cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 48ba8371,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0,PodSandboxId:37d43ed8797170034a3fe41c1f1b7b1de3a9600a846a92f67d2a7cfd4d831e11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713805090204249501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c66e348c58730d7efb8ebd6834f7506,},Annotations:map[
string]string{io.kubernetes.container.hash: 68bbc046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1c69881-5af2-497a-8339-c97ee5e6d4ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.114978867Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8440468-e55e-46f1-bd9f-3ce5f55cb431 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.115227250Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8440468-e55e-46f1-bd9f-3ce5f55cb431 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.121624145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0a80baa0-e5b3-43c9-a5f5-c0e893960943 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.122868196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713805376122838202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0a80baa0-e5b3-43c9-a5f5-c0e893960943 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.123864366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d944338-4056-4e85-b298-af85a8ecf543 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.123931454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d944338-4056-4e85-b298-af85a8ecf543 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:02:56 addons-934361 crio[687]: time="2024-04-22 17:02:56.124617480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9df277752128a44f9e298d1a2296a2bce4750d3a1c337bf3938708926a65cb56,PodSandboxId:2c3dc4771468f9728a01b77e14ad419ea8b99f83f75c25c4b3b7f4f2d3bb47f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713805369539163809,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-zdkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f7d61ea-53c0-4922-9b21-e8daf0c21bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 2657a6f6,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9926944e9d15780564b120bb3d82fdc6172add917ca19b3a512b5e95405bdc5d,PodSandboxId:bb751305424892411c6ebd876b38752fd70b8c5c74a8b14c493973ed31688055,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713805318561774948,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-jx57l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 05d7c185-fcb3-4db1-941f-58c4cf86a75f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 4680ca88,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f302e7c44f2215a4eb4dc05c17696313ac15c01877ceeb4d34d0b451097e36ae,PodSandboxId:9b6c83eadeecd540eb3d823a28df9c9191d19e1349bb76d7ee640f5ace8fd487,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713805229544862244,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 54f74d8d-0de6-4905-8880-cdc716c944b3,},Annotations:map[string]string{io.kubernetes.container.hash: 7742d2c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac,PodSandboxId:e7dc89ac1feef0671d5d95fff08c7b88cf4313ffb379e61507b3fc92b599af4c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713805200007867342,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-hb6nw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 616d4a54-5dfb-45cc-9b0a-a2461bbdc3e8,},Annotations:map[string]string{io.kubernetes.container.hash: ff242c65,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f53d8807307fb58fd40a6301d224dba6301156f19ff29d351a9adba4825e3d,PodSandboxId:d315380f6f0c80ead3fb4bf151d25e08896780af9efb8b3da9417b402bc724d9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_E
XITED,CreatedAt:1713805196981624080,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-84df5799c-kk8bl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f4a856af-bd62-4050-b05b-81914b90e27e,},Annotations:map[string]string{io.kubernetes.container.hash: 9f29c00c,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a6962942a6d92d4b0e2d100987670d7b3ddc655dbd50d9ba62da86116de79867,PodSandboxId:12961f9b2c6744253f73ce127ce9f7108f
0f22f81da57ad2717217a56a90bdbf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181146930288,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-jcxb8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 86ce0533-4864-45ac-859b-65b1bc8630ea,},Annotations:map[string]string{io.kubernetes.container.hash: 8ab4dd03,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d4057c59bb2224b137638b04e782f5f48d6ae9ae4fc2a1fc61b6e95b6bd1a3,PodSandboxId
:6a92cfec1da9afbddab2e321e92123eaa892d1056b567e0abe5806c2b5d0dcbc,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713805181021339644,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6ffzh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2b42d4d2-2f3e-4bf8-af3d-eec264315cdc,},Annotations:map[string]string{io.kubernetes.container.hash: dfa3b370,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992ddaec1770dd27b6fcf280f55f128597b66f
ded6028ed19bd06581c6a7af4,PodSandboxId:f4386e68702ef1455c642103ff9814ba90f77b0f7ef5e4021e8659af4c45530f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713805176582586683,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-dqx5m,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3cca16d5-c0b9-4588-87c2-aa2cdbcbe7d9,},Annotations:map[string]string{io.kubernetes.container.hash: ecfe4f92,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,i
o.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a,PodSandboxId:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713805167902556409,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-9rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be72f5e4-ae81-48d6-b57f-d9640e75904a,},Annotations:map[string]string{io.kubernetes.container.hash: dde1c527,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0,PodSandboxId:9a83f28ec25a17677c9a3dbe8eaf24730d2f8678d9e0d72053a42122b232eb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805116439144899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddb4fb4-7de5-44ef-9bac-3930ce87160c,},Annotations:map[string]string{io.kubernetes.container.hash: a017758,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839,PodSandboxId:c16f432ead2fcea0e9e3dc709ce1f66044a91451e425e92bb0e3b50e8b8fd5d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805114646949234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9kl4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46deec4f-c97e-48aa-b1ca-9c679e0a64e2,},Annotations:map[string]string{io.kubernetes.container.hash: 85712065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":5
3,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160,PodSandboxId:6a5e4353dde8f7e0fadf7a0693d9c624a94121396d4daa820ffb5ed996ef7e32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805111283702745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbd87,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: b08b8c4d-9f59-4f64-8503-e5d055487f74,},Annotations:map[string]string{io.kubernetes.container.hash: 4d716e8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d,PodSandboxId:aa58a6a1d4ea9b4ae0775732d9149c31b5d9f97c57149b082c6a5ba21fd7d06a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805090322229442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-934361,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: d903f02e20fa6303480e5550d5ff53c6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09,PodSandboxId:ed579bd1a44698e8dbb907cd5e2b51bbcc0b49cf0b9581ea5a0502d71b9a3462,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713805090294253425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 4a435bd17231af3fb00e256e0ac8b418,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e,PodSandboxId:9c1684f3b30a58043ea007f2cf40ec39dc7abe196b588d110a0be598ded10ee9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713805090237410245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70
b88177cb739840d9af5145b71cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 48ba8371,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0,PodSandboxId:37d43ed8797170034a3fe41c1f1b7b1de3a9600a846a92f67d2a7cfd4d831e11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713805090204249501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c66e348c58730d7efb8ebd6834f7506,},Annotations:map[
string]string{io.kubernetes.container.hash: 68bbc046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d944338-4056-4e85-b298-af85a8ecf543 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9df277752128a       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      6 seconds ago       Running             hello-world-app           0                   2c3dc4771468f       hello-world-app-86c47465fc-zdkzg
	9926944e9d157       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        57 seconds ago      Running             headlamp                  0                   bb75130542489       headlamp-7559bf459f-jx57l
	f302e7c44f221       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                              2 minutes ago       Running             nginx                     0                   9b6c83eadeecd       nginx
	b1a1ac745bcf0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 2 minutes ago       Running             gcp-auth                  0                   e7dc89ac1feef       gcp-auth-5db96cd9b4-hb6nw
	71f53d8807307       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c             2 minutes ago       Exited              controller                0                   d315380f6f0c8       ingress-nginx-controller-84df5799c-kk8bl
	a6962942a6d92       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   12961f9b2c674       ingress-nginx-admission-patch-jcxb8
	82d4057c59bb2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   6a92cfec1da9a       ingress-nginx-admission-create-6ffzh
	b992ddaec1770       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   f4386e68702ef       yakd-dashboard-5ddbf7d777-dqx5m
	0b79999aaa757       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   953e3f9469192       metrics-server-c59844bb4-9rwbq
	b30c9c1cea934       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   9a83f28ec25a1       storage-provisioner
	fb7076488504b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   c16f432ead2fc       coredns-7db6d8ff4d-9kl4l
	2bee9d70cd4bd       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                             4 minutes ago       Running             kube-proxy                0                   6a5e4353dde8f       kube-proxy-zbd87
	63ef320e6c20d       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                             4 minutes ago       Running             kube-controller-manager   0                   aa58a6a1d4ea9       kube-controller-manager-addons-934361
	6bfe236d105ed       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                             4 minutes ago       Running             kube-scheduler            0                   ed579bd1a4469       kube-scheduler-addons-934361
	f8c3dd43dc9c6       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                             4 minutes ago       Running             kube-apiserver            0                   9c1684f3b30a5       kube-apiserver-addons-934361
	0cef8fab46a0f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   37d43ed879717       etcd-addons-934361
	
	
	==> coredns [fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839] <==
	[INFO] 10.244.0.7:49911 - 48847 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00070284s
	[INFO] 10.244.0.7:56817 - 9106 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000144253s
	[INFO] 10.244.0.7:56817 - 52126 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071517s
	[INFO] 10.244.0.7:37733 - 62594 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069402s
	[INFO] 10.244.0.7:37733 - 52100 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133197s
	[INFO] 10.244.0.7:46105 - 5339 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088529s
	[INFO] 10.244.0.7:46105 - 5337 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134415s
	[INFO] 10.244.0.7:37067 - 34336 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000184296s
	[INFO] 10.244.0.7:37067 - 52783 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000273801s
	[INFO] 10.244.0.7:60120 - 63077 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068555s
	[INFO] 10.244.0.7:60120 - 54371 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032454s
	[INFO] 10.244.0.7:58709 - 62195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032319s
	[INFO] 10.244.0.7:58709 - 47601 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059034s
	[INFO] 10.244.0.7:36499 - 35445 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063831s
	[INFO] 10.244.0.7:36499 - 62347 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000026885s
	[INFO] 10.244.0.22:47401 - 13695 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000398656s
	[INFO] 10.244.0.22:60736 - 34009 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000102179s
	[INFO] 10.244.0.22:39459 - 54337 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098307s
	[INFO] 10.244.0.22:33691 - 1173 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000241831s
	[INFO] 10.244.0.22:48377 - 24442 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112591s
	[INFO] 10.244.0.22:52282 - 23413 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00005398s
	[INFO] 10.244.0.22:53445 - 36897 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000639433s
	[INFO] 10.244.0.22:50030 - 18100 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001006445s
	[INFO] 10.244.0.23:36304 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002736466s
	[INFO] 10.244.0.23:37709 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000237898s
	
	
	==> describe nodes <==
	Name:               addons-934361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-934361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=addons-934361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T16_58_16_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-934361
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 16:58:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-934361
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:02:21 +0000   Mon, 22 Apr 2024 16:58:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:02:21 +0000   Mon, 22 Apr 2024 16:58:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:02:21 +0000   Mon, 22 Apr 2024 16:58:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:02:21 +0000   Mon, 22 Apr 2024 16:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    addons-934361
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2706d446b56941c5901b11db32cb61d2
	  System UUID:                2706d446-b569-41c5-901b-11db32cb61d2
	  Boot ID:                    6bc823c6-7e50-4cce-bb78-65a464b0a746
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-zdkzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-5db96cd9b4-hb6nw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  headlamp                    headlamp-7559bf459f-jx57l                0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 coredns-7db6d8ff4d-9kl4l                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m27s
	  kube-system                 etcd-addons-934361                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-934361             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-934361    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-zbd87                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-934361             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 metrics-server-c59844bb4-9rwbq           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m21s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-dqx5m          0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m24s  kube-proxy       
	  Normal  Starting                 4m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s  kubelet          Node addons-934361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s  kubelet          Node addons-934361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s  kubelet          Node addons-934361 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-934361 status is now: NodeReady
	  Normal  RegisteredNode           4m28s  node-controller  Node addons-934361 event: Registered Node addons-934361 in Controller
	
	
	==> dmesg <==
	[  +5.338820] kauditd_printk_skb: 106 callbacks suppressed
	[ +14.011100] kauditd_printk_skb: 5 callbacks suppressed
	[Apr22 16:59] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.123053] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.483322] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.061613] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.080444] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.606015] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.518867] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 17:00] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.332821] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.738701] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.258583] kauditd_printk_skb: 32 callbacks suppressed
	[ +18.635768] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.501461] kauditd_printk_skb: 15 callbacks suppressed
	[Apr22 17:01] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.889614] kauditd_printk_skb: 2 callbacks suppressed
	[ +15.915807] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.870628] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.501543] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.600825] kauditd_printk_skb: 24 callbacks suppressed
	[Apr22 17:02] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.117678] kauditd_printk_skb: 7 callbacks suppressed
	[ +40.135246] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.172720] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0] <==
	{"level":"info","ts":"2024-04-22T16:59:24.630749Z","caller":"traceutil/trace.go:171","msg":"trace[1361596995] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:939; }","duration":"119.03451ms","start":"2024-04-22T16:59:24.511703Z","end":"2024-04-22T16:59:24.630737Z","steps":["trace[1361596995] 'agreement among raft nodes before linearized reading'  (duration: 118.801891ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T16:59:24.630924Z","caller":"traceutil/trace.go:171","msg":"trace[1125529670] transaction","detail":"{read_only:false; response_revision:939; number_of_response:1; }","duration":"175.076823ms","start":"2024-04-22T16:59:24.45584Z","end":"2024-04-22T16:59:24.630917Z","steps":["trace[1125529670] 'process raft request'  (duration: 174.379144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T16:59:24.631539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.705606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85134"}
	{"level":"info","ts":"2024-04-22T16:59:24.631593Z","caller":"traceutil/trace.go:171","msg":"trace[1891859423] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:939; }","duration":"115.782682ms","start":"2024-04-22T16:59:24.515802Z","end":"2024-04-22T16:59:24.631585Z","steps":["trace[1891859423] 'agreement among raft nodes before linearized reading'  (duration: 115.583567ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T16:59:46.664576Z","caller":"traceutil/trace.go:171","msg":"trace[1071014651] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"108.091796ms","start":"2024-04-22T16:59:46.556378Z","end":"2024-04-22T16:59:46.66447Z","steps":["trace[1071014651] 'process raft request'  (duration: 106.351844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T16:59:56.79784Z","caller":"traceutil/trace.go:171","msg":"trace[36495517] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"245.259788ms","start":"2024-04-22T16:59:56.552564Z","end":"2024-04-22T16:59:56.797824Z","steps":["trace[36495517] 'process raft request'  (duration: 244.931014ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:00:06.288347Z","caller":"traceutil/trace.go:171","msg":"trace[25965756] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"285.750524ms","start":"2024-04-22T17:00:06.00258Z","end":"2024-04-22T17:00:06.28833Z","steps":["trace[25965756] 'process raft request'  (duration: 285.599339ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:17.677774Z","caller":"traceutil/trace.go:171","msg":"trace[337112495] linearizableReadLoop","detail":"{readStateIndex:1542; appliedIndex:1541; }","duration":"169.98594ms","start":"2024-04-22T17:01:17.507747Z","end":"2024-04-22T17:01:17.677733Z","steps":["trace[337112495] 'read index received'  (duration: 169.862954ms)","trace[337112495] 'applied index is now lower than readState.Index'  (duration: 122.556µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:01:17.678158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.340225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-04-22T17:01:17.678232Z","caller":"traceutil/trace.go:171","msg":"trace[1168414504] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1479; }","duration":"170.500449ms","start":"2024-04-22T17:01:17.507722Z","end":"2024-04-22T17:01:17.678222Z","steps":["trace[1168414504] 'agreement among raft nodes before linearized reading'  (duration: 170.225909ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:17.678462Z","caller":"traceutil/trace.go:171","msg":"trace[1112686816] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"198.382208ms","start":"2024-04-22T17:01:17.480072Z","end":"2024-04-22T17:01:17.678455Z","steps":["trace[1112686816] 'process raft request'  (duration: 197.581777ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:23.851235Z","caller":"traceutil/trace.go:171","msg":"trace[1269274059] transaction","detail":"{read_only:false; response_revision:1492; number_of_response:1; }","duration":"143.049819ms","start":"2024-04-22T17:01:23.708167Z","end":"2024-04-22T17:01:23.851217Z","steps":["trace[1269274059] 'process raft request'  (duration: 142.948726ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:48.979427Z","caller":"traceutil/trace.go:171","msg":"trace[694671986] linearizableReadLoop","detail":"{readStateIndex:1743; appliedIndex:1742; }","duration":"164.955934ms","start":"2024-04-22T17:01:48.814458Z","end":"2024-04-22T17:01:48.979414Z","steps":["trace[694671986] 'read index received'  (duration: 164.757811ms)","trace[694671986] 'applied index is now lower than readState.Index'  (duration: 197.701µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:01:48.979543Z","caller":"traceutil/trace.go:171","msg":"trace[1874466795] transaction","detail":"{read_only:false; response_revision:1670; number_of_response:1; }","duration":"171.99275ms","start":"2024-04-22T17:01:48.807539Z","end":"2024-04-22T17:01:48.979532Z","steps":["trace[1874466795] 'process raft request'  (duration: 171.749955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:01:48.979923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.451886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6125"}
	{"level":"info","ts":"2024-04-22T17:01:48.980076Z","caller":"traceutil/trace.go:171","msg":"trace[1194715953] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1670; }","duration":"165.632526ms","start":"2024-04-22T17:01:48.814435Z","end":"2024-04-22T17:01:48.980067Z","steps":["trace[1194715953] 'agreement among raft nodes before linearized reading'  (duration: 165.406916ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:58.460354Z","caller":"traceutil/trace.go:171","msg":"trace[913897160] linearizableReadLoop","detail":"{readStateIndex:1833; appliedIndex:1832; }","duration":"166.390696ms","start":"2024-04-22T17:01:58.29395Z","end":"2024-04-22T17:01:58.460341Z","steps":["trace[913897160] 'read index received'  (duration: 166.242883ms)","trace[913897160] 'applied index is now lower than readState.Index'  (duration: 147.392µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:01:58.460456Z","caller":"traceutil/trace.go:171","msg":"trace[544735588] transaction","detail":"{read_only:false; response_revision:1757; number_of_response:1; }","duration":"430.169114ms","start":"2024-04-22T17:01:58.03028Z","end":"2024-04-22T17:01:58.460449Z","steps":["trace[544735588] 'process raft request'  (duration: 429.952907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:01:58.460571Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T17:01:58.030267Z","time spent":"430.21481ms","remote":"127.0.0.1:35118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1755 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-22T17:01:58.460851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.894158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T17:01:58.460876Z","caller":"traceutil/trace.go:171","msg":"trace[480731048] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1757; }","duration":"166.943114ms","start":"2024-04-22T17:01:58.293926Z","end":"2024-04-22T17:01:58.460869Z","steps":["trace[480731048] 'agreement among raft nodes before linearized reading'  (duration: 166.898748ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:02:34.792857Z","caller":"traceutil/trace.go:171","msg":"trace[967922381] linearizableReadLoop","detail":"{readStateIndex:1913; appliedIndex:1912; }","duration":"139.296945ms","start":"2024-04-22T17:02:34.653518Z","end":"2024-04-22T17:02:34.792815Z","steps":["trace[967922381] 'read index received'  (duration: 139.066603ms)","trace[967922381] 'applied index is now lower than readState.Index'  (duration: 229.413µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:02:34.793165Z","caller":"traceutil/trace.go:171","msg":"trace[169896204] transaction","detail":"{read_only:false; response_revision:1829; number_of_response:1; }","duration":"143.003767ms","start":"2024-04-22T17:02:34.650139Z","end":"2024-04-22T17:02:34.793143Z","steps":["trace[169896204] 'process raft request'  (duration: 142.489075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:02:34.793377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.78288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.135\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-22T17:02:34.793448Z","caller":"traceutil/trace.go:171","msg":"trace[35807185] range","detail":"{range_begin:/registry/masterleases/192.168.39.135; range_end:; response_count:1; response_revision:1829; }","duration":"139.9362ms","start":"2024-04-22T17:02:34.653496Z","end":"2024-04-22T17:02:34.793432Z","steps":["trace[35807185] 'agreement among raft nodes before linearized reading'  (duration: 139.631837ms)"],"step_count":1}
	
	
	==> gcp-auth [b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac] <==
	2024/04/22 17:00:00 GCP Auth Webhook started!
	2024/04/22 17:00:19 Ready to marshal response ...
	2024/04/22 17:00:19 Ready to write response ...
	2024/04/22 17:00:25 Ready to marshal response ...
	2024/04/22 17:00:25 Ready to write response ...
	2024/04/22 17:00:42 Ready to marshal response ...
	2024/04/22 17:00:42 Ready to write response ...
	2024/04/22 17:00:42 Ready to marshal response ...
	2024/04/22 17:00:42 Ready to write response ...
	2024/04/22 17:00:53 Ready to marshal response ...
	2024/04/22 17:00:53 Ready to write response ...
	2024/04/22 17:01:11 Ready to marshal response ...
	2024/04/22 17:01:11 Ready to write response ...
	2024/04/22 17:01:34 Ready to marshal response ...
	2024/04/22 17:01:34 Ready to write response ...
	2024/04/22 17:01:52 Ready to marshal response ...
	2024/04/22 17:01:52 Ready to write response ...
	2024/04/22 17:01:52 Ready to marshal response ...
	2024/04/22 17:01:52 Ready to write response ...
	2024/04/22 17:01:52 Ready to marshal response ...
	2024/04/22 17:01:52 Ready to write response ...
	2024/04/22 17:02:00 Ready to marshal response ...
	2024/04/22 17:02:00 Ready to write response ...
	2024/04/22 17:02:45 Ready to marshal response ...
	2024/04/22 17:02:45 Ready to write response ...
	
	
	==> kernel <==
	 17:02:56 up 5 min,  0 users,  load average: 1.04, 1.44, 0.72
	Linux addons-934361 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e] <==
	W0422 17:00:36.583739       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 17:00:36.583941       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0422 17:00:36.584707       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.140.247:443: connect: connection refused
	E0422 17:00:36.589647       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.140.247:443: connect: connection refused
	I0422 17:00:36.651949       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0422 17:01:09.807581       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0422 17:01:26.339156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0422 17:01:51.434856       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.434900       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.467718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.467778       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.478486       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.479048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.500328       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.500388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.513264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.513315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0422 17:01:52.480525       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0422 17:01:52.514175       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0422 17:01:52.541447       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0422 17:01:52.690628       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.15.75"}
	E0422 17:02:04.374686       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.135:8443->10.244.0.31:37752: read: connection reset by peer
	I0422 17:02:46.095620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.11.199"}
	E0422 17:02:48.379811       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d] <==
	W0422 17:02:02.055125       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:02.055329       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 17:02:06.635700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="8.154µs"
	W0422 17:02:09.436597       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:09.436709       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:02:10.691812       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:10.691991       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:02:10.867985       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:10.868089       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:02:12.301139       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:12.301196       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:02:30.807673       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:30.807750       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:02:34.170847       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:34.170927       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:02:35.895115       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:02:35.895150       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 17:02:45.921776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.28658ms"
	I0422 17:02:45.945381       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="23.517247ms"
	I0422 17:02:45.945466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="34.92µs"
	I0422 17:02:48.154662       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0422 17:02:48.156483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="4.893µs"
	I0422 17:02:48.173697       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0422 17:02:50.407924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="7.085092ms"
	I0422 17:02:50.409271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="45.36µs"
	
	
	==> kube-proxy [2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160] <==
	I0422 16:58:32.286126       1 server_linux.go:69] "Using iptables proxy"
	I0422 16:58:32.303116       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.135"]
	I0422 16:58:32.375432       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 16:58:32.375468       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 16:58:32.375484       1 server_linux.go:165] "Using iptables Proxier"
	I0422 16:58:32.381266       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 16:58:32.381413       1 server.go:872] "Version info" version="v1.30.0"
	I0422 16:58:32.381424       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 16:58:32.384613       1 config.go:192] "Starting service config controller"
	I0422 16:58:32.384625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 16:58:32.384656       1 config.go:101] "Starting endpoint slice config controller"
	I0422 16:58:32.384659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 16:58:32.384986       1 config.go:319] "Starting node config controller"
	I0422 16:58:32.384993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 16:58:32.484756       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 16:58:32.484844       1 shared_informer.go:320] Caches are synced for service config
	I0422 16:58:32.488992       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09] <==
	W0422 16:58:13.901510       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 16:58:13.901622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 16:58:13.907903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 16:58:13.908118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 16:58:13.913776       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 16:58:13.913827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 16:58:13.968945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:13.969122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:14.012304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 16:58:14.012571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 16:58:14.047166       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 16:58:14.047321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 16:58:14.086948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:14.087059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:14.130758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 16:58:14.131208       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 16:58:14.131745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:14.131875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:14.166989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 16:58:14.168104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 16:58:14.208731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 16:58:14.208783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 16:58:14.340276       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 16:58:14.340389       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 16:58:17.376185       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 17:02:45 addons-934361 kubelet[1286]: I0422 17:02:45.912944    1286 topology_manager.go:215] "Topology Admit Handler" podUID="3f7d61ea-53c0-4922-9b21-e8daf0c21bd7" podNamespace="default" podName="hello-world-app-86c47465fc-zdkzg"
	Apr 22 17:02:45 addons-934361 kubelet[1286]: E0422 17:02:45.913170    1286 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f9e931f2-b963-4451-b2ba-43729c996f37" containerName="helm-test"
	Apr 22 17:02:45 addons-934361 kubelet[1286]: E0422 17:02:45.913192    1286 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8ca5bebc-4067-46c4-b889-2eae5e85437d" containerName="tiller"
	Apr 22 17:02:45 addons-934361 kubelet[1286]: I0422 17:02:45.913238    1286 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e931f2-b963-4451-b2ba-43729c996f37" containerName="helm-test"
	Apr 22 17:02:45 addons-934361 kubelet[1286]: I0422 17:02:45.913245    1286 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca5bebc-4067-46c4-b889-2eae5e85437d" containerName="tiller"
	Apr 22 17:02:45 addons-934361 kubelet[1286]: I0422 17:02:45.949638    1286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92klf\" (UniqueName: \"kubernetes.io/projected/3f7d61ea-53c0-4922-9b21-e8daf0c21bd7-kube-api-access-92klf\") pod \"hello-world-app-86c47465fc-zdkzg\" (UID: \"3f7d61ea-53c0-4922-9b21-e8daf0c21bd7\") " pod="default/hello-world-app-86c47465fc-zdkzg"
	Apr 22 17:02:45 addons-934361 kubelet[1286]: I0422 17:02:45.949763    1286 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3f7d61ea-53c0-4922-9b21-e8daf0c21bd7-gcp-creds\") pod \"hello-world-app-86c47465fc-zdkzg\" (UID: \"3f7d61ea-53c0-4922-9b21-e8daf0c21bd7\") " pod="default/hello-world-app-86c47465fc-zdkzg"
	Apr 22 17:02:47 addons-934361 kubelet[1286]: I0422 17:02:47.160800    1286 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhntd\" (UniqueName: \"kubernetes.io/projected/0a75b318-14a2-4ad7-805f-363d1863bbdb-kube-api-access-nhntd\") pod \"0a75b318-14a2-4ad7-805f-363d1863bbdb\" (UID: \"0a75b318-14a2-4ad7-805f-363d1863bbdb\") "
	Apr 22 17:02:47 addons-934361 kubelet[1286]: I0422 17:02:47.169589    1286 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a75b318-14a2-4ad7-805f-363d1863bbdb-kube-api-access-nhntd" (OuterVolumeSpecName: "kube-api-access-nhntd") pod "0a75b318-14a2-4ad7-805f-363d1863bbdb" (UID: "0a75b318-14a2-4ad7-805f-363d1863bbdb"). InnerVolumeSpecName "kube-api-access-nhntd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 17:02:47 addons-934361 kubelet[1286]: I0422 17:02:47.214509    1286 scope.go:117] "RemoveContainer" containerID="e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188"
	Apr 22 17:02:47 addons-934361 kubelet[1286]: I0422 17:02:47.256796    1286 scope.go:117] "RemoveContainer" containerID="e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188"
	Apr 22 17:02:47 addons-934361 kubelet[1286]: E0422 17:02:47.257975    1286 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188\": container with ID starting with e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188 not found: ID does not exist" containerID="e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188"
	Apr 22 17:02:47 addons-934361 kubelet[1286]: I0422 17:02:47.258133    1286 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188"} err="failed to get container status \"e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188\": rpc error: code = NotFound desc = could not find container \"e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188\": container with ID starting with e75e8a295565fd007d95a71f98c66e5770692edb626a2c88c10bf9238d223188 not found: ID does not exist"
	Apr 22 17:02:47 addons-934361 kubelet[1286]: I0422 17:02:47.262562    1286 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nhntd\" (UniqueName: \"kubernetes.io/projected/0a75b318-14a2-4ad7-805f-363d1863bbdb-kube-api-access-nhntd\") on node \"addons-934361\" DevicePath \"\""
	Apr 22 17:02:47 addons-934361 kubelet[1286]: I0422 17:02:47.576529    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a75b318-14a2-4ad7-805f-363d1863bbdb" path="/var/lib/kubelet/pods/0a75b318-14a2-4ad7-805f-363d1863bbdb/volumes"
	Apr 22 17:02:49 addons-934361 kubelet[1286]: I0422 17:02:49.577792    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b42d4d2-2f3e-4bf8-af3d-eec264315cdc" path="/var/lib/kubelet/pods/2b42d4d2-2f3e-4bf8-af3d-eec264315cdc/volumes"
	Apr 22 17:02:49 addons-934361 kubelet[1286]: I0422 17:02:49.578289    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86ce0533-4864-45ac-859b-65b1bc8630ea" path="/var/lib/kubelet/pods/86ce0533-4864-45ac-859b-65b1bc8630ea/volumes"
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.406224    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d315380f6f0c80ead3fb4bf151d25e08896780af9efb8b3da9417b402bc724d9"
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.598507    1286 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6x6v\" (UniqueName: \"kubernetes.io/projected/f4a856af-bd62-4050-b05b-81914b90e27e-kube-api-access-j6x6v\") pod \"f4a856af-bd62-4050-b05b-81914b90e27e\" (UID: \"f4a856af-bd62-4050-b05b-81914b90e27e\") "
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.598582    1286 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4a856af-bd62-4050-b05b-81914b90e27e-webhook-cert\") pod \"f4a856af-bd62-4050-b05b-81914b90e27e\" (UID: \"f4a856af-bd62-4050-b05b-81914b90e27e\") "
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.601312    1286 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a856af-bd62-4050-b05b-81914b90e27e-kube-api-access-j6x6v" (OuterVolumeSpecName: "kube-api-access-j6x6v") pod "f4a856af-bd62-4050-b05b-81914b90e27e" (UID: "f4a856af-bd62-4050-b05b-81914b90e27e"). InnerVolumeSpecName "kube-api-access-j6x6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.602480    1286 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a856af-bd62-4050-b05b-81914b90e27e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f4a856af-bd62-4050-b05b-81914b90e27e" (UID: "f4a856af-bd62-4050-b05b-81914b90e27e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.699350    1286 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j6x6v\" (UniqueName: \"kubernetes.io/projected/f4a856af-bd62-4050-b05b-81914b90e27e-kube-api-access-j6x6v\") on node \"addons-934361\" DevicePath \"\""
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.699384    1286 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4a856af-bd62-4050-b05b-81914b90e27e-webhook-cert\") on node \"addons-934361\" DevicePath \"\""
	Apr 22 17:02:53 addons-934361 kubelet[1286]: I0422 17:02:53.577191    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a856af-bd62-4050-b05b-81914b90e27e" path="/var/lib/kubelet/pods/f4a856af-bd62-4050-b05b-81914b90e27e/volumes"
	
	
	==> storage-provisioner [b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0] <==
	I0422 16:58:36.971074       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 16:58:37.176696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 16:58:37.176739       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 16:58:37.230118       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 16:58:37.230279       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-934361_8fb7efa7-a319-4388-ae68-1203787d6366!
	I0422 16:58:37.279902       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53255fb8-5ed2-42af-aa8d-2f49cb24b17b", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-934361_8fb7efa7-a319-4388-ae68-1203787d6366 became leader
	I0422 16:58:37.431300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-934361_8fb7efa7-a319-4388-ae68-1203787d6366!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-934361 -n addons-934361
helpers_test.go:261: (dbg) Run:  kubectl --context addons-934361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.56s)
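A note on reading this failure: the ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" entry in the Audit table further down has no End Time, which suggests the HTTP probe through the ingress never completed within the test's timeout. Below is a minimal sketch of how one might re-run that probe by hand, assuming the addons-934361 profile and the ingress-nginx addon were still running; these commands mirror ones already shown in this report and were not executed as part of the run:

	# confirm the ingress-nginx controller pod is Ready and see which node/IP it landed on
	kubectl --context addons-934361 -n ingress-nginx get pods -o wide
	# repeat the probe the test performs, with verbose curl output to see where it stalls
	out/minikube-linux-amd64 -p addons-934361 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"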

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (297.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 23.369104ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-9rwbq" [be72f5e4-ae81-48d6-b57f-d9640e75904a] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011505287s
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (62.600878ms)

                                                
                                                
** stderr ** 
	error: Metrics API not available

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (51.165791ms)

                                                
                                                
** stderr ** 
	error: Metrics API not available

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (69.65209ms)

                                                
                                                
** stderr ** 
	error: Metrics API not available

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (53.212556ms)

                                                
                                                
** stderr ** 
	error: Metrics API not available

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (52.345566ms)

                                                
                                                
** stderr ** 
	error: Metrics API not available

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (68.423875ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-9kl4l, age: 2m19.897959587s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (91.542512ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-9kl4l, age: 2m53.865349393s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (68.93643ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-9kl4l, age: 3m25.230379972s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (69.147134ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-9kl4l, age: 4m1.50293462s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (63.47273ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-9kl4l, age: 4m56.330455012s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (68.385023ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-9kl4l, age: 5m43.48969945s

                                                
                                                
** /stderr **
addons_test.go:415: (dbg) Run:  kubectl --context addons-934361 top pods -n kube-system
addons_test.go:415: (dbg) Non-zero exit: kubectl --context addons-934361 top pods -n kube-system: exit status 1 (64.860377ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-9kl4l, age: 6m33.369030305s

                                                
                                                
** /stderr **
addons_test.go:429: failed checking metric server: exit status 1
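A note on reading this failure: every "kubectl top pods" attempt above failed, first with "Metrics API not available" and later with "Metrics not available for pod ...", even though the metrics-server pod itself reported Running. That pattern usually means the aggregated metrics.k8s.io API never became Available. The sketch below shows checks one could run by hand, assuming the profile were still up; the APIService name v1beta1.metrics.k8s.io is the conventional metrics-server registration and is an assumption, not something captured in this report:

	# check whether the aggregated metrics API ever reported Available
	kubectl --context addons-934361 get apiservice v1beta1.metrics.k8s.io -o wide
	# inspect the metrics-server pod and its recent logs for scrape or TLS errors
	kubectl --context addons-934361 -n kube-system describe pod -l k8s-app=metrics-server
	kubectl --context addons-934361 -n kube-system logs -l k8s-app=metrics-server --tail=50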
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-934361 -n addons-934361
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-934361 logs -n 25: (1.482959265s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-029298                                                                     | download-only-029298 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-330754                                                                     | download-only-330754 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-029298                                                                     | download-only-029298 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-330619 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | binary-mirror-330619                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35745                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-330619                                                                     | binary-mirror-330619 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-934361 --wait=true                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 17:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| ip      | addons-934361 ip                                                                            | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-934361 ssh curl -s                                                                   | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-934361 ssh cat                                                                       | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | /opt/local-path-provisioner/pvc-0ebcd1de-0138-48d2-b5bd-8d480b1e737e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-934361 addons                                                                        | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | -p addons-934361                                                                            |                      |         |         |                     |                     |
	| addons  | addons-934361 addons                                                                        | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | -p addons-934361                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | addons-934361                                                                               |                      |         |         |                     |                     |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-934361 ip                                                                            | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-934361 addons disable                                                                | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:02 UTC | 22 Apr 24 17:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-934361 addons                                                                        | addons-934361        | jenkins | v1.33.0 | 22 Apr 24 17:05 UTC | 22 Apr 24 17:05 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 16:57:32
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 16:57:32.913167   19497 out.go:291] Setting OutFile to fd 1 ...
	I0422 16:57:32.913443   19497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:32.913454   19497 out.go:304] Setting ErrFile to fd 2...
	I0422 16:57:32.913458   19497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:32.913649   19497 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 16:57:32.914295   19497 out.go:298] Setting JSON to false
	I0422 16:57:32.915237   19497 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2398,"bootTime":1713802655,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 16:57:32.915307   19497 start.go:139] virtualization: kvm guest
	I0422 16:57:32.917660   19497 out.go:177] * [addons-934361] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 16:57:32.920013   19497 notify.go:220] Checking for updates...
	I0422 16:57:32.920030   19497 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 16:57:32.921622   19497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 16:57:32.922949   19497 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 16:57:32.924377   19497 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:57:32.926241   19497 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 16:57:32.927908   19497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 16:57:32.929605   19497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 16:57:32.962505   19497 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 16:57:32.964096   19497 start.go:297] selected driver: kvm2
	I0422 16:57:32.964115   19497 start.go:901] validating driver "kvm2" against <nil>
	I0422 16:57:32.964126   19497 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 16:57:32.964847   19497 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:57:32.964928   19497 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 16:57:32.980022   19497 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 16:57:32.980067   19497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 16:57:32.980266   19497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 16:57:32.980331   19497 cni.go:84] Creating CNI manager for ""
	I0422 16:57:32.980343   19497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:57:32.980354   19497 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 16:57:32.980410   19497 start.go:340] cluster config:
	{Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:57:32.980492   19497 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:57:32.983473   19497 out.go:177] * Starting "addons-934361" primary control-plane node in "addons-934361" cluster
	I0422 16:57:32.985031   19497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 16:57:32.985079   19497 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 16:57:32.985097   19497 cache.go:56] Caching tarball of preloaded images
	I0422 16:57:32.985201   19497 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 16:57:32.985213   19497 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 16:57:32.985492   19497 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/config.json ...
	I0422 16:57:32.985512   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/config.json: {Name:mkfb81b895cc31bd1604cd73f5f7b7f89bcc4420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:32.985639   19497 start.go:360] acquireMachinesLock for addons-934361: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 16:57:32.985683   19497 start.go:364] duration metric: took 31.081µs to acquireMachinesLock for "addons-934361"
	I0422 16:57:32.985702   19497 start.go:93] Provisioning new machine with config: &{Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 16:57:32.985757   19497 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 16:57:32.987607   19497 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0422 16:57:32.987744   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:57:32.987783   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:57:33.002165   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0422 16:57:33.002644   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:57:33.003301   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:57:33.003322   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:57:33.003607   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:57:33.003807   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:33.003944   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:33.004092   19497 start.go:159] libmachine.API.Create for "addons-934361" (driver="kvm2")
	I0422 16:57:33.004139   19497 client.go:168] LocalClient.Create starting
	I0422 16:57:33.004189   19497 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 16:57:33.157525   19497 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 16:57:33.233084   19497 main.go:141] libmachine: Running pre-create checks...
	I0422 16:57:33.233108   19497 main.go:141] libmachine: (addons-934361) Calling .PreCreateCheck
	I0422 16:57:33.233603   19497 main.go:141] libmachine: (addons-934361) Calling .GetConfigRaw
	I0422 16:57:33.234046   19497 main.go:141] libmachine: Creating machine...
	I0422 16:57:33.234059   19497 main.go:141] libmachine: (addons-934361) Calling .Create
	I0422 16:57:33.234210   19497 main.go:141] libmachine: (addons-934361) Creating KVM machine...
	I0422 16:57:33.235492   19497 main.go:141] libmachine: (addons-934361) DBG | found existing default KVM network
	I0422 16:57:33.236263   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.236117   19519 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001831f0}
	I0422 16:57:33.236302   19497 main.go:141] libmachine: (addons-934361) DBG | created network xml: 
	I0422 16:57:33.236320   19497 main.go:141] libmachine: (addons-934361) DBG | <network>
	I0422 16:57:33.236336   19497 main.go:141] libmachine: (addons-934361) DBG |   <name>mk-addons-934361</name>
	I0422 16:57:33.236344   19497 main.go:141] libmachine: (addons-934361) DBG |   <dns enable='no'/>
	I0422 16:57:33.236366   19497 main.go:141] libmachine: (addons-934361) DBG |   
	I0422 16:57:33.236386   19497 main.go:141] libmachine: (addons-934361) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 16:57:33.236420   19497 main.go:141] libmachine: (addons-934361) DBG |     <dhcp>
	I0422 16:57:33.236432   19497 main.go:141] libmachine: (addons-934361) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 16:57:33.236442   19497 main.go:141] libmachine: (addons-934361) DBG |     </dhcp>
	I0422 16:57:33.236460   19497 main.go:141] libmachine: (addons-934361) DBG |   </ip>
	I0422 16:57:33.236473   19497 main.go:141] libmachine: (addons-934361) DBG |   
	I0422 16:57:33.236484   19497 main.go:141] libmachine: (addons-934361) DBG | </network>
	I0422 16:57:33.236495   19497 main.go:141] libmachine: (addons-934361) DBG | 
	I0422 16:57:33.242006   19497 main.go:141] libmachine: (addons-934361) DBG | trying to create private KVM network mk-addons-934361 192.168.39.0/24...
	I0422 16:57:33.310405   19497 main.go:141] libmachine: (addons-934361) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361 ...
	I0422 16:57:33.310441   19497 main.go:141] libmachine: (addons-934361) DBG | private KVM network mk-addons-934361 192.168.39.0/24 created
	I0422 16:57:33.310455   19497 main.go:141] libmachine: (addons-934361) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 16:57:33.310481   19497 main.go:141] libmachine: (addons-934361) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 16:57:33.310572   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.310240   19519 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:57:33.558499   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.558343   19519 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa...
	I0422 16:57:33.646061   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.645931   19519 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/addons-934361.rawdisk...
	I0422 16:57:33.646092   19497 main.go:141] libmachine: (addons-934361) DBG | Writing magic tar header
	I0422 16:57:33.646103   19497 main.go:141] libmachine: (addons-934361) DBG | Writing SSH key tar header
	I0422 16:57:33.646111   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:33.646050   19519 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361 ...
	I0422 16:57:33.646210   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361
	I0422 16:57:33.646245   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361 (perms=drwx------)
	I0422 16:57:33.646257   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 16:57:33.646272   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:57:33.646281   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 16:57:33.646291   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 16:57:33.646299   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home/jenkins
	I0422 16:57:33.646309   19497 main.go:141] libmachine: (addons-934361) DBG | Checking permissions on dir: /home
	I0422 16:57:33.646317   19497 main.go:141] libmachine: (addons-934361) DBG | Skipping /home - not owner
	I0422 16:57:33.646332   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 16:57:33.646358   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 16:57:33.646376   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 16:57:33.646388   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 16:57:33.646397   19497 main.go:141] libmachine: (addons-934361) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 16:57:33.646405   19497 main.go:141] libmachine: (addons-934361) Creating domain...
	I0422 16:57:33.647547   19497 main.go:141] libmachine: (addons-934361) define libvirt domain using xml: 
	I0422 16:57:33.647575   19497 main.go:141] libmachine: (addons-934361) <domain type='kvm'>
	I0422 16:57:33.647585   19497 main.go:141] libmachine: (addons-934361)   <name>addons-934361</name>
	I0422 16:57:33.647593   19497 main.go:141] libmachine: (addons-934361)   <memory unit='MiB'>4000</memory>
	I0422 16:57:33.647603   19497 main.go:141] libmachine: (addons-934361)   <vcpu>2</vcpu>
	I0422 16:57:33.647614   19497 main.go:141] libmachine: (addons-934361)   <features>
	I0422 16:57:33.647622   19497 main.go:141] libmachine: (addons-934361)     <acpi/>
	I0422 16:57:33.647633   19497 main.go:141] libmachine: (addons-934361)     <apic/>
	I0422 16:57:33.647644   19497 main.go:141] libmachine: (addons-934361)     <pae/>
	I0422 16:57:33.647651   19497 main.go:141] libmachine: (addons-934361)     
	I0422 16:57:33.647662   19497 main.go:141] libmachine: (addons-934361)   </features>
	I0422 16:57:33.647670   19497 main.go:141] libmachine: (addons-934361)   <cpu mode='host-passthrough'>
	I0422 16:57:33.647684   19497 main.go:141] libmachine: (addons-934361)   
	I0422 16:57:33.647698   19497 main.go:141] libmachine: (addons-934361)   </cpu>
	I0422 16:57:33.647706   19497 main.go:141] libmachine: (addons-934361)   <os>
	I0422 16:57:33.647711   19497 main.go:141] libmachine: (addons-934361)     <type>hvm</type>
	I0422 16:57:33.647719   19497 main.go:141] libmachine: (addons-934361)     <boot dev='cdrom'/>
	I0422 16:57:33.647741   19497 main.go:141] libmachine: (addons-934361)     <boot dev='hd'/>
	I0422 16:57:33.647754   19497 main.go:141] libmachine: (addons-934361)     <bootmenu enable='no'/>
	I0422 16:57:33.647758   19497 main.go:141] libmachine: (addons-934361)   </os>
	I0422 16:57:33.647784   19497 main.go:141] libmachine: (addons-934361)   <devices>
	I0422 16:57:33.647808   19497 main.go:141] libmachine: (addons-934361)     <disk type='file' device='cdrom'>
	I0422 16:57:33.647829   19497 main.go:141] libmachine: (addons-934361)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/boot2docker.iso'/>
	I0422 16:57:33.647838   19497 main.go:141] libmachine: (addons-934361)       <target dev='hdc' bus='scsi'/>
	I0422 16:57:33.647851   19497 main.go:141] libmachine: (addons-934361)       <readonly/>
	I0422 16:57:33.647862   19497 main.go:141] libmachine: (addons-934361)     </disk>
	I0422 16:57:33.647872   19497 main.go:141] libmachine: (addons-934361)     <disk type='file' device='disk'>
	I0422 16:57:33.647891   19497 main.go:141] libmachine: (addons-934361)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 16:57:33.647906   19497 main.go:141] libmachine: (addons-934361)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/addons-934361.rawdisk'/>
	I0422 16:57:33.647919   19497 main.go:141] libmachine: (addons-934361)       <target dev='hda' bus='virtio'/>
	I0422 16:57:33.647926   19497 main.go:141] libmachine: (addons-934361)     </disk>
	I0422 16:57:33.647936   19497 main.go:141] libmachine: (addons-934361)     <interface type='network'>
	I0422 16:57:33.647947   19497 main.go:141] libmachine: (addons-934361)       <source network='mk-addons-934361'/>
	I0422 16:57:33.647960   19497 main.go:141] libmachine: (addons-934361)       <model type='virtio'/>
	I0422 16:57:33.647971   19497 main.go:141] libmachine: (addons-934361)     </interface>
	I0422 16:57:33.647983   19497 main.go:141] libmachine: (addons-934361)     <interface type='network'>
	I0422 16:57:33.647995   19497 main.go:141] libmachine: (addons-934361)       <source network='default'/>
	I0422 16:57:33.648006   19497 main.go:141] libmachine: (addons-934361)       <model type='virtio'/>
	I0422 16:57:33.648018   19497 main.go:141] libmachine: (addons-934361)     </interface>
	I0422 16:57:33.648026   19497 main.go:141] libmachine: (addons-934361)     <serial type='pty'>
	I0422 16:57:33.648034   19497 main.go:141] libmachine: (addons-934361)       <target port='0'/>
	I0422 16:57:33.648042   19497 main.go:141] libmachine: (addons-934361)     </serial>
	I0422 16:57:33.648047   19497 main.go:141] libmachine: (addons-934361)     <console type='pty'>
	I0422 16:57:33.648055   19497 main.go:141] libmachine: (addons-934361)       <target type='serial' port='0'/>
	I0422 16:57:33.648062   19497 main.go:141] libmachine: (addons-934361)     </console>
	I0422 16:57:33.648068   19497 main.go:141] libmachine: (addons-934361)     <rng model='virtio'>
	I0422 16:57:33.648076   19497 main.go:141] libmachine: (addons-934361)       <backend model='random'>/dev/random</backend>
	I0422 16:57:33.648083   19497 main.go:141] libmachine: (addons-934361)     </rng>
	I0422 16:57:33.648089   19497 main.go:141] libmachine: (addons-934361)     
	I0422 16:57:33.648117   19497 main.go:141] libmachine: (addons-934361)     
	I0422 16:57:33.648138   19497 main.go:141] libmachine: (addons-934361)   </devices>
	I0422 16:57:33.648151   19497 main.go:141] libmachine: (addons-934361) </domain>
	I0422 16:57:33.648160   19497 main.go:141] libmachine: (addons-934361) 
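The block above is the libvirt domain XML that libmachine defines for the guest: 4000 MiB of memory, 2 vCPUs, a host-passthrough CPU, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (the mk-addons-934361 network plus the default network). As a rough illustration only, not the code path the driver actually takes (it goes through the libvirt API rather than the CLI), the same definition could be registered and booted by hand with virsh; the connection URI and domain name come from the log, the XML file name is hypothetical:

	virsh --connect qemu:///system define /tmp/addons-934361.xml    # register the domain from the XML dumped above
	virsh --connect qemu:///system start addons-934361              # boot it; this is the "Creating domain..." step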
	I0422 16:57:33.654106   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:4c:a2:07 in network default
	I0422 16:57:33.654745   19497 main.go:141] libmachine: (addons-934361) Ensuring networks are active...
	I0422 16:57:33.654762   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:33.655480   19497 main.go:141] libmachine: (addons-934361) Ensuring network default is active
	I0422 16:57:33.655867   19497 main.go:141] libmachine: (addons-934361) Ensuring network mk-addons-934361 is active
	I0422 16:57:33.656457   19497 main.go:141] libmachine: (addons-934361) Getting domain xml...
	I0422 16:57:33.657064   19497 main.go:141] libmachine: (addons-934361) Creating domain...
	I0422 16:57:35.064105   19497 main.go:141] libmachine: (addons-934361) Waiting to get IP...
	I0422 16:57:35.064943   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.065402   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.065447   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.065371   19519 retry.go:31] will retry after 196.289335ms: waiting for machine to come up
	I0422 16:57:35.262878   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.263318   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.263351   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.263261   19519 retry.go:31] will retry after 329.965242ms: waiting for machine to come up
	I0422 16:57:35.594897   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.595516   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.595557   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.595471   19519 retry.go:31] will retry after 323.084257ms: waiting for machine to come up
	I0422 16:57:35.919988   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:35.920439   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:35.920461   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:35.920404   19519 retry.go:31] will retry after 530.948858ms: waiting for machine to come up
	I0422 16:57:36.453183   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:36.453601   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:36.453631   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:36.453540   19519 retry.go:31] will retry after 631.595219ms: waiting for machine to come up
	I0422 16:57:37.086388   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:37.086741   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:37.086767   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:37.086701   19519 retry.go:31] will retry after 816.177659ms: waiting for machine to come up
	I0422 16:57:37.904194   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:37.904562   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:37.904589   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:37.904521   19519 retry.go:31] will retry after 920.390325ms: waiting for machine to come up
	I0422 16:57:38.826553   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:38.826989   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:38.827034   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:38.826953   19519 retry.go:31] will retry after 1.134107914s: waiting for machine to come up
	I0422 16:57:39.963410   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:39.963825   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:39.963852   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:39.963778   19519 retry.go:31] will retry after 1.219492702s: waiting for machine to come up
	I0422 16:57:41.185380   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:41.185754   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:41.185776   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:41.185708   19519 retry.go:31] will retry after 1.58783081s: waiting for machine to come up
	I0422 16:57:42.775763   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:42.776237   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:42.776269   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:42.776191   19519 retry.go:31] will retry after 2.643870295s: waiting for machine to come up
	I0422 16:57:45.423145   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:45.423608   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:45.423638   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:45.423553   19519 retry.go:31] will retry after 2.886737467s: waiting for machine to come up
	I0422 16:57:48.312273   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:48.312843   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:48.312869   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:48.312792   19519 retry.go:31] will retry after 3.559179926s: waiting for machine to come up
	I0422 16:57:51.876561   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:51.877079   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find current IP address of domain addons-934361 in network mk-addons-934361
	I0422 16:57:51.877103   19497 main.go:141] libmachine: (addons-934361) DBG | I0422 16:57:51.877016   19519 retry.go:31] will retry after 4.672115704s: waiting for machine to come up
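The repeated "unable to find current IP address" messages are the driver polling libvirt's DHCP leases for the guest's MAC address, backing off from roughly 200ms up to several seconds between attempts. A minimal shell sketch of the same polling idea, using the network and MAC address from the log (the fixed delay list below is a stand-in for the driver's growing backoff, not its actual schedule):

	mac=52:54:00:34:5f:36
	for delay in 0.2 0.3 0.5 1 2 4; do
	  ip=$(virsh --connect qemu:///system net-dhcp-leases mk-addons-934361 \
	        | awk -v m="$mac" '$0 ~ m {print $5}' | cut -d/ -f1)
	  [ -n "$ip" ] && { echo "machine is up at $ip"; break; }
	  sleep "$delay"
	done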
	I0422 16:57:56.555319   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.555818   19497 main.go:141] libmachine: (addons-934361) Found IP for machine: 192.168.39.135
	I0422 16:57:56.555842   19497 main.go:141] libmachine: (addons-934361) Reserving static IP address...
	I0422 16:57:56.555857   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has current primary IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.556164   19497 main.go:141] libmachine: (addons-934361) DBG | unable to find host DHCP lease matching {name: "addons-934361", mac: "52:54:00:34:5f:36", ip: "192.168.39.135"} in network mk-addons-934361
	I0422 16:57:56.632283   19497 main.go:141] libmachine: (addons-934361) DBG | Getting to WaitForSSH function...
	I0422 16:57:56.632316   19497 main.go:141] libmachine: (addons-934361) Reserved static IP address: 192.168.39.135
	I0422 16:57:56.632337   19497 main.go:141] libmachine: (addons-934361) Waiting for SSH to be available...
	I0422 16:57:56.634699   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.635034   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:56.635062   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.635330   19497 main.go:141] libmachine: (addons-934361) DBG | Using SSH client type: external
	I0422 16:57:56.635352   19497 main.go:141] libmachine: (addons-934361) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa (-rw-------)
	I0422 16:57:56.635384   19497 main.go:141] libmachine: (addons-934361) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 16:57:56.635402   19497 main.go:141] libmachine: (addons-934361) DBG | About to run SSH command:
	I0422 16:57:56.635422   19497 main.go:141] libmachine: (addons-934361) DBG | exit 0
	I0422 16:57:56.772006   19497 main.go:141] libmachine: (addons-934361) DBG | SSH cmd err, output: <nil>: 
	I0422 16:57:56.772370   19497 main.go:141] libmachine: (addons-934361) KVM machine creation complete!
	I0422 16:57:56.772674   19497 main.go:141] libmachine: (addons-934361) Calling .GetConfigRaw
	I0422 16:57:56.773187   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:56.773436   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:56.773659   19497 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 16:57:56.773681   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:57:56.775180   19497 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 16:57:56.775198   19497 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 16:57:56.775206   19497 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 16:57:56.775214   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:56.777738   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.778089   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:56.778135   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.778248   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:56.778459   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.778644   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.778845   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:56.779028   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:56.779239   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:56.779252   19497 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 16:57:56.890411   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 16:57:56.890433   19497 main.go:141] libmachine: Detecting the provisioner...
	I0422 16:57:56.890441   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:56.892911   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.893187   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:56.893216   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:56.893354   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:56.893537   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.893697   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:56.893824   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:56.893962   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:56.894154   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:56.894167   19497 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 16:57:57.008025   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 16:57:57.008103   19497 main.go:141] libmachine: found compatible host: buildroot
	I0422 16:57:57.008118   19497 main.go:141] libmachine: Provisioning with buildroot...
	I0422 16:57:57.008130   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:57.008409   19497 buildroot.go:166] provisioning hostname "addons-934361"
	I0422 16:57:57.008434   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:57.008607   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.010808   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.011117   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.011155   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.011293   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.011482   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.011654   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.011790   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.011940   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:57.012116   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:57.012129   19497 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-934361 && echo "addons-934361" | sudo tee /etc/hostname
	I0422 16:57:57.138442   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-934361
	
	I0422 16:57:57.138469   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.140913   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.141175   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.141206   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.141357   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.141600   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.141820   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.141978   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.142135   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:57.142301   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:57.142323   19497 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-934361' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-934361/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-934361' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 16:57:57.265945   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 16:57:57.265971   19497 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 16:57:57.266003   19497 buildroot.go:174] setting up certificates
	I0422 16:57:57.266018   19497 provision.go:84] configureAuth start
	I0422 16:57:57.266030   19497 main.go:141] libmachine: (addons-934361) Calling .GetMachineName
	I0422 16:57:57.266309   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:57.268905   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.269231   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.269262   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.269383   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.271340   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.271729   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.271759   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.271850   19497 provision.go:143] copyHostCerts
	I0422 16:57:57.271924   19497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 16:57:57.272087   19497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 16:57:57.272169   19497 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 16:57:57.272217   19497 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.addons-934361 san=[127.0.0.1 192.168.39.135 addons-934361 localhost minikube]
	I0422 16:57:57.434206   19497 provision.go:177] copyRemoteCerts
	I0422 16:57:57.434281   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 16:57:57.434305   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.437182   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.437549   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.437576   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.437763   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.437944   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.438097   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.438293   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:57.527105   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 16:57:57.553360   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 16:57:57.580930   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 16:57:57.606784   19497 provision.go:87] duration metric: took 340.753357ms to configureAuth
	I0422 16:57:57.606817   19497 buildroot.go:189] setting minikube options for container-runtime
	I0422 16:57:57.607047   19497 config.go:182] Loaded profile config "addons-934361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 16:57:57.607118   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.610035   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.610408   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.610438   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.610606   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.610812   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.610954   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.611071   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.611215   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:57.611368   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:57.611382   19497 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 16:57:57.906243   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 16:57:57.906269   19497 main.go:141] libmachine: Checking connection to Docker...
	I0422 16:57:57.906277   19497 main.go:141] libmachine: (addons-934361) Calling .GetURL
	I0422 16:57:57.907607   19497 main.go:141] libmachine: (addons-934361) DBG | Using libvirt version 6000000
	I0422 16:57:57.909801   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.910133   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.910176   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.910336   19497 main.go:141] libmachine: Docker is up and running!
	I0422 16:57:57.910359   19497 main.go:141] libmachine: Reticulating splines...
	I0422 16:57:57.910446   19497 client.go:171] duration metric: took 24.906287596s to LocalClient.Create
	I0422 16:57:57.910477   19497 start.go:167] duration metric: took 24.906383498s to libmachine.API.Create "addons-934361"
	I0422 16:57:57.910495   19497 start.go:293] postStartSetup for "addons-934361" (driver="kvm2")
	I0422 16:57:57.910511   19497 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 16:57:57.910536   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:57.910762   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 16:57:57.910784   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:57.912827   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.913165   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:57.913195   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:57.913309   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:57.913484   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:57.915346   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:57.915516   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:58.002575   19497 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 16:57:58.006932   19497 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 16:57:58.006956   19497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 16:57:58.007053   19497 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 16:57:58.007089   19497 start.go:296] duration metric: took 96.583636ms for postStartSetup
	I0422 16:57:58.007147   19497 main.go:141] libmachine: (addons-934361) Calling .GetConfigRaw
	I0422 16:57:58.007638   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:58.009887   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.010260   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.010288   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.010507   19497 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/config.json ...
	I0422 16:57:58.010668   19497 start.go:128] duration metric: took 25.024900859s to createHost
	I0422 16:57:58.010689   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:58.012620   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.012910   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.012934   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.013027   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:58.013181   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.013293   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.013392   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:58.013573   19497 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:58.013719   19497 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.135 22 <nil> <nil>}
	I0422 16:57:58.013730   19497 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 16:57:58.128279   19497 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713805078.111978304
	
	I0422 16:57:58.128312   19497 fix.go:216] guest clock: 1713805078.111978304
	I0422 16:57:58.128324   19497 fix.go:229] Guest: 2024-04-22 16:57:58.111978304 +0000 UTC Remote: 2024-04-22 16:57:58.010678611 +0000 UTC m=+25.143897313 (delta=101.299693ms)
	I0422 16:57:58.128353   19497 fix.go:200] guest clock delta is within tolerance: 101.299693ms
	I0422 16:57:58.128361   19497 start.go:83] releasing machines lock for "addons-934361", held for 25.142666377s
	I0422 16:57:58.128389   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.128668   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:58.131304   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.131683   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.131731   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.131892   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.132417   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.132620   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:57:58.132690   19497 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 16:57:58.132734   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:58.132987   19497 ssh_runner.go:195] Run: cat /version.json
	I0422 16:57:58.133046   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:57:58.135755   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.135911   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.136185   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.136210   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.136269   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:58.136291   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:58.136446   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:58.136583   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:57:58.136660   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.136750   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:57:58.136819   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:58.136932   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:57:58.136979   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:58.137078   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:57:58.265754   19497 ssh_runner.go:195] Run: systemctl --version
	I0422 16:57:58.272127   19497 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 16:57:58.438948   19497 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 16:57:58.445801   19497 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 16:57:58.445886   19497 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 16:57:58.462759   19497 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 16:57:58.462807   19497 start.go:494] detecting cgroup driver to use...
	I0422 16:57:58.462865   19497 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 16:57:58.482589   19497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 16:57:58.496926   19497 docker.go:217] disabling cri-docker service (if available) ...
	I0422 16:57:58.496993   19497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 16:57:58.510534   19497 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 16:57:58.524640   19497 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 16:57:58.643412   19497 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 16:57:58.778594   19497 docker.go:233] disabling docker service ...
	I0422 16:57:58.778671   19497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 16:57:58.794473   19497 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 16:57:58.807555   19497 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 16:57:58.946799   19497 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 16:57:59.088635   19497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 16:57:59.103507   19497 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 16:57:59.123395   19497 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 16:57:59.123462   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.134619   19497 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 16:57:59.134693   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.146372   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.158324   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.170135   19497 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 16:57:59.182367   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.194501   19497 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.214253   19497 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 16:57:59.227940   19497 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 16:57:59.240245   19497 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 16:57:59.240303   19497 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 16:57:59.258497   19497 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 16:57:59.270917   19497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:57:59.422933   19497 ssh_runner.go:195] Run: sudo systemctl restart crio
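The sequence above points crictl at the CRI-O socket, sets the pause image, switches CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, opens unprivileged low ports, loads br_netfilter, enables IP forwarding, and restarts the runtime. Reconstructed from those sed expressions (not read back from the machine), the drop-in /etc/crio/crio.conf.d/02-crio.conf should end up carrying values along these lines:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]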
	I0422 16:57:59.570460   19497 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 16:57:59.570542   19497 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 16:57:59.576139   19497 start.go:562] Will wait 60s for crictl version
	I0422 16:57:59.576230   19497 ssh_runner.go:195] Run: which crictl
	I0422 16:57:59.580266   19497 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 16:57:59.614859   19497 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 16:57:59.614990   19497 ssh_runner.go:195] Run: crio --version
	I0422 16:57:59.644280   19497 ssh_runner.go:195] Run: crio --version
	I0422 16:57:59.676250   19497 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 16:57:59.677940   19497 main.go:141] libmachine: (addons-934361) Calling .GetIP
	I0422 16:57:59.680787   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:59.681101   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:57:59.681133   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:57:59.681326   19497 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 16:57:59.685758   19497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 16:57:59.698946   19497 kubeadm.go:877] updating cluster {Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 16:57:59.699046   19497 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 16:57:59.699084   19497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 16:57:59.737065   19497 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 16:57:59.737130   19497 ssh_runner.go:195] Run: which lz4
	I0422 16:57:59.741558   19497 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 16:57:59.746192   19497 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 16:57:59.746232   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 16:58:01.202590   19497 crio.go:462] duration metric: took 1.461060178s to copy over tarball
	I0422 16:58:01.202666   19497 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 16:58:03.638220   19497 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.435508535s)
	I0422 16:58:03.638258   19497 crio.go:469] duration metric: took 2.435643243s to extract the tarball
	I0422 16:58:03.638265   19497 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 16:58:03.675619   19497 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 16:58:03.721783   19497 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 16:58:03.721812   19497 cache_images.go:84] Images are preloaded, skipping loading
	I0422 16:58:03.721821   19497 kubeadm.go:928] updating node { 192.168.39.135 8443 v1.30.0 crio true true} ...
	I0422 16:58:03.721938   19497 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-934361 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
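This is the kubelet systemd drop-in that gets written to the node a few lines below (the 313-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); it pins the kubelet binary under /var/lib/minikube/binaries/v1.30.0 and the node IP/hostname. A quick way to confirm the drop-in took effect on the guest, assuming only standard systemd tooling:

	systemctl cat kubelet          # prints kubelet.service together with the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager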
	I0422 16:58:03.722008   19497 ssh_runner.go:195] Run: crio config
	I0422 16:58:03.770282   19497 cni.go:84] Creating CNI manager for ""
	I0422 16:58:03.770315   19497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:58:03.770340   19497 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 16:58:03.770363   19497 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.135 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-934361 NodeName:addons-934361 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 16:58:03.770501   19497 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-934361"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 16:58:03.770570   19497 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 16:58:03.781382   19497 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 16:58:03.781456   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 16:58:03.791904   19497 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0422 16:58:03.810753   19497 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 16:58:03.830138   19497 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
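	The 2157-byte kubeadm.yaml.new copied above is the rendered configuration shown a few lines earlier. As a hedged aside, a config like this can be sanity-checked without touching the node via kubeadm's dry-run mode (binary path taken from the log; running it is an illustration, not part of the test):
	# Validate the generated config without applying any changes to the host.
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run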
	I0422 16:58:03.851812   19497 ssh_runner.go:195] Run: grep 192.168.39.135	control-plane.minikube.internal$ /etc/hosts
	I0422 16:58:03.855935   19497 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
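	The bash one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts. Spelled out, the same idempotent pattern looks like this (IP and hostname come from the log; the temp-file path is illustrative):
	IP=192.168.39.135
	NAME=control-plane.minikube.internal
	# Drop any stale tab-separated entry for NAME, append the current mapping, then replace /etc/hosts.
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts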
	I0422 16:58:03.869134   19497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:58:04.004126   19497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 16:58:04.023200   19497 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361 for IP: 192.168.39.135
	I0422 16:58:04.023232   19497 certs.go:194] generating shared ca certs ...
	I0422 16:58:04.023260   19497 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.023423   19497 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 16:58:04.169771   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt ...
	I0422 16:58:04.169799   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt: {Name:mk733199a2acd8a83bf9ab3c6df11b5053cc823a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.169976   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key ...
	I0422 16:58:04.169990   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key: {Name:mkb06c6d181caf61810be6bbb1655d5e3186dd47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.170061   19497 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 16:58:04.440848   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt ...
	I0422 16:58:04.440884   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt: {Name:mka8fab8fb90853c7953652d2abd820aa5f16fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.441032   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key ...
	I0422 16:58:04.441044   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key: {Name:mke3e87318f9834e9111317ba9236faea4c0aa13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.441110   19497 certs.go:256] generating profile certs ...
	I0422 16:58:04.441162   19497 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.key
	I0422 16:58:04.441174   19497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt with IP's: []
	I0422 16:58:04.685380   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt ...
	I0422 16:58:04.685413   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: {Name:mk7aaf8ea5151f336baa6fd63646eb55b3c1f3e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.685575   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.key ...
	I0422 16:58:04.685586   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.key: {Name:mk1d89d43202fa99577c60b6a234332203615819 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.685647   19497 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e
	I0422 16:58:04.685664   19497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.135]
	I0422 16:58:04.813975   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e ...
	I0422 16:58:04.814012   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e: {Name:mkb421f1c658ace3f5a29849fbe6303faea94ab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.814173   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e ...
	I0422 16:58:04.814189   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e: {Name:mk56953efdb0d28ca9acd9032a390bf1a26f1f6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:04.814253   19497 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt.dadc656e -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt
	I0422 16:58:04.814347   19497 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key.dadc656e -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key
	I0422 16:58:04.814395   19497 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key
	I0422 16:58:04.814414   19497 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt with IP's: []
	I0422 16:58:05.054913   19497 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt ...
	I0422 16:58:05.054945   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt: {Name:mke240646e716d28e8ed5c155cf66f0d7b90a640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:05.055105   19497 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key ...
	I0422 16:58:05.055117   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key: {Name:mke4d7b005de65f852f8891254497ff22e8f52e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:05.055295   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 16:58:05.055330   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 16:58:05.055357   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 16:58:05.055377   19497 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 16:58:05.055970   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 16:58:05.084644   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 16:58:05.111750   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 16:58:05.137857   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 16:58:05.163550   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 16:58:05.189830   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 16:58:05.216714   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 16:58:05.242797   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 16:58:05.268849   19497 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 16:58:05.295030   19497 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 16:58:05.313884   19497 ssh_runner.go:195] Run: openssl version
	I0422 16:58:05.320178   19497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 16:58:05.333133   19497 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:58:05.338116   19497 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:58:05.338178   19497 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:58:05.344694   19497 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
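	The b5213941.0 name in the symlink above is OpenSSL's subject hash for the minikube CA; trust stores under /etc/ssl/certs are keyed by that hash. It can be reproduced by hand (paths taken from the log):
	# Print the subject hash that names the /etc/ssl/certs/<hash>.0 symlink.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# expected output for this CA: b5213941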
	I0422 16:58:05.356791   19497 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 16:58:05.361082   19497 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 16:58:05.361130   19497 kubeadm.go:391] StartCluster: {Name:addons-934361 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-934361 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:58:05.361197   19497 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 16:58:05.361242   19497 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 16:58:05.403060   19497 cri.go:89] found id: ""
	I0422 16:58:05.403153   19497 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 16:58:05.416874   19497 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 16:58:05.438650   19497 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 16:58:05.455170   19497 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 16:58:05.455191   19497 kubeadm.go:156] found existing configuration files:
	
	I0422 16:58:05.455235   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 16:58:05.467544   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 16:58:05.467619   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 16:58:05.487567   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 16:58:05.497552   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 16:58:05.497616   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 16:58:05.507989   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 16:58:05.517784   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 16:58:05.517839   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 16:58:05.528132   19497 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 16:58:05.538030   19497 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 16:58:05.538096   19497 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 16:58:05.548704   19497 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 16:58:05.722346   19497 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 16:58:16.196199   19497 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 16:58:16.196342   19497 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 16:58:16.196445   19497 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 16:58:16.196531   19497 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 16:58:16.196645   19497 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 16:58:16.196732   19497 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 16:58:16.198434   19497 out.go:204]   - Generating certificates and keys ...
	I0422 16:58:16.198517   19497 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 16:58:16.198573   19497 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 16:58:16.198639   19497 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 16:58:16.198698   19497 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 16:58:16.198754   19497 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 16:58:16.198797   19497 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 16:58:16.198890   19497 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 16:58:16.199029   19497 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-934361 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0422 16:58:16.199086   19497 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 16:58:16.199237   19497 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-934361 localhost] and IPs [192.168.39.135 127.0.0.1 ::1]
	I0422 16:58:16.199323   19497 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 16:58:16.199415   19497 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 16:58:16.199482   19497 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 16:58:16.199559   19497 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 16:58:16.199637   19497 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 16:58:16.199714   19497 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 16:58:16.199797   19497 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 16:58:16.199898   19497 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 16:58:16.200011   19497 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 16:58:16.200137   19497 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 16:58:16.200229   19497 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 16:58:16.202026   19497 out.go:204]   - Booting up control plane ...
	I0422 16:58:16.202149   19497 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 16:58:16.202244   19497 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 16:58:16.202342   19497 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 16:58:16.202475   19497 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 16:58:16.202588   19497 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 16:58:16.202666   19497 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 16:58:16.202818   19497 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 16:58:16.202907   19497 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 16:58:16.202998   19497 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.057478ms
	I0422 16:58:16.203103   19497 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 16:58:16.203200   19497 kubeadm.go:309] [api-check] The API server is healthy after 5.001477785s
	I0422 16:58:16.203356   19497 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 16:58:16.203511   19497 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 16:58:16.203594   19497 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 16:58:16.203825   19497 kubeadm.go:309] [mark-control-plane] Marking the node addons-934361 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 16:58:16.203906   19497 kubeadm.go:309] [bootstrap-token] Using token: umwtaq.jexr9s59rhfkzxx5
	I0422 16:58:16.205516   19497 out.go:204]   - Configuring RBAC rules ...
	I0422 16:58:16.205668   19497 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 16:58:16.205751   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 16:58:16.205905   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 16:58:16.206054   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 16:58:16.206188   19497 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 16:58:16.206311   19497 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 16:58:16.206465   19497 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 16:58:16.206528   19497 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 16:58:16.206602   19497 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 16:58:16.206610   19497 kubeadm.go:309] 
	I0422 16:58:16.206691   19497 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 16:58:16.206700   19497 kubeadm.go:309] 
	I0422 16:58:16.206805   19497 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 16:58:16.206813   19497 kubeadm.go:309] 
	I0422 16:58:16.206858   19497 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 16:58:16.206948   19497 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 16:58:16.207021   19497 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 16:58:16.207030   19497 kubeadm.go:309] 
	I0422 16:58:16.207109   19497 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 16:58:16.207117   19497 kubeadm.go:309] 
	I0422 16:58:16.207207   19497 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 16:58:16.207216   19497 kubeadm.go:309] 
	I0422 16:58:16.207289   19497 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 16:58:16.207407   19497 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 16:58:16.207505   19497 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 16:58:16.207523   19497 kubeadm.go:309] 
	I0422 16:58:16.207642   19497 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 16:58:16.207746   19497 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 16:58:16.207755   19497 kubeadm.go:309] 
	I0422 16:58:16.207867   19497 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token umwtaq.jexr9s59rhfkzxx5 \
	I0422 16:58:16.208024   19497 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 16:58:16.208058   19497 kubeadm.go:309] 	--control-plane 
	I0422 16:58:16.208067   19497 kubeadm.go:309] 
	I0422 16:58:16.208192   19497 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 16:58:16.208207   19497 kubeadm.go:309] 
	I0422 16:58:16.208283   19497 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token umwtaq.jexr9s59rhfkzxx5 \
	I0422 16:58:16.208393   19497 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 16:58:16.208407   19497 cni.go:84] Creating CNI manager for ""
	I0422 16:58:16.208420   19497 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:58:16.210095   19497 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 16:58:16.211364   19497 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 16:58:16.225429   19497 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
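	The 496-byte 1-k8s.conflist written above is not dumped in the log. For orientation only, a minimal bridge + host-local conflist in the same spirit would look roughly like the sketch below; every value here is an assumption except the pod CIDR, which matches the 10.244.0.0/16 chosen earlier:
	# Illustrative bridge CNI config (NOT the exact file minikube wrote).
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF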
	I0422 16:58:16.246994   19497 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 16:58:16.247077   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:16.247111   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-934361 minikube.k8s.io/updated_at=2024_04_22T16_58_16_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=addons-934361 minikube.k8s.io/primary=true
	I0422 16:58:16.283016   19497 ops.go:34] apiserver oom_adj: -16
	I0422 16:58:16.402894   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:16.903825   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:17.403008   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:17.903778   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:18.403481   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:18.903898   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:19.403618   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:19.903105   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:20.403758   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:20.903842   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:21.403746   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:21.903760   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:22.403521   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:22.903667   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:23.403588   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:23.903636   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:24.403082   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:24.903653   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:25.402990   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:25.903142   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:26.403580   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:26.902897   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:27.403484   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:27.903633   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:28.402935   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:28.902981   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:29.403676   19497 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:29.617693   19497 kubeadm.go:1107] duration metric: took 13.370678652s to wait for elevateKubeSystemPrivileges
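	The block of repeated "kubectl get sa default" calls above is minikube polling roughly every half second until the default service account exists; the 13.37s metric is that wait. A hedged bash equivalent of the same loop, using the binary and kubeconfig paths from the log:
	# Poll for the default ServiceAccount; 120 iterations at 0.5s gives an illustrative ~60s timeout.
	for _ in $(seq 1 120); do
	  if sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
	    echo "default service account is ready"
	    break
	  fi
	  sleep 0.5
	done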
	W0422 16:58:29.617726   19497 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 16:58:29.617734   19497 kubeadm.go:393] duration metric: took 24.256607098s to StartCluster
	I0422 16:58:29.617754   19497 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:29.617864   19497 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 16:58:29.618244   19497 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:29.618453   19497 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.135 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 16:58:29.620700   19497 out.go:177] * Verifying Kubernetes components...
	I0422 16:58:29.618485   19497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 16:58:29.618499   19497 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0422 16:58:29.618689   19497 config.go:182] Loaded profile config "addons-934361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
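	The toEnable map above is the set of addons this start turns on for the profile. For reference, the same addons can be toggled on a running profile with the standard minikube CLI; a usage sketch against this profile (commands are generic minikube usage, not taken from the log):
	minikube -p addons-934361 addons list
	minikube -p addons-934361 addons enable metrics-server
	minikube -p addons-934361 addons disable ingress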
	I0422 16:58:29.620802   19497 addons.go:69] Setting cloud-spanner=true in profile "addons-934361"
	I0422 16:58:29.620823   19497 addons.go:69] Setting yakd=true in profile "addons-934361"
	I0422 16:58:29.620846   19497 addons.go:234] Setting addon yakd=true in "addons-934361"
	I0422 16:58:29.620849   19497 addons.go:234] Setting addon cloud-spanner=true in "addons-934361"
	I0422 16:58:29.620878   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620879   19497 addons.go:69] Setting registry=true in profile "addons-934361"
	I0422 16:58:29.620884   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620900   19497 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-934361"
	I0422 16:58:29.620899   19497 addons.go:234] Setting addon registry=true in "addons-934361"
	I0422 16:58:29.620927   19497 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-934361"
	I0422 16:58:29.620938   19497 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-934361"
	I0422 16:58:29.620938   19497 addons.go:69] Setting metrics-server=true in profile "addons-934361"
	I0422 16:58:29.620957   19497 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-934361"
	I0422 16:58:29.620978   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620989   19497 addons.go:69] Setting inspektor-gadget=true in profile "addons-934361"
	I0422 16:58:29.621005   19497 addons.go:234] Setting addon inspektor-gadget=true in "addons-934361"
	I0422 16:58:29.621036   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621323   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621393   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621409   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.620929   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621456   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621479   19497 addons.go:69] Setting ingress-dns=true in profile "addons-934361"
	I0422 16:58:29.621529   19497 addons.go:234] Setting addon ingress-dns=true in "addons-934361"
	I0422 16:58:29.621576   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.620979   19497 addons.go:234] Setting addon metrics-server=true in "addons-934361"
	I0422 16:58:29.621649   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621334   19497 addons.go:69] Setting volumesnapshots=true in profile "addons-934361"
	I0422 16:58:29.621755   19497 addons.go:234] Setting addon volumesnapshots=true in "addons-934361"
	I0422 16:58:29.621780   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.621787   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621339   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621348   19497 addons.go:69] Setting gcp-auth=true in profile "addons-934361"
	I0422 16:58:29.623709   19497 mustload.go:65] Loading cluster: addons-934361
	I0422 16:58:29.621351   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621354   19497 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-934361"
	I0422 16:58:29.621352   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.621358   19497 addons.go:69] Setting default-storageclass=true in profile "addons-934361"
	I0422 16:58:29.623897   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623903   19497 config.go:182] Loaded profile config "addons-934361": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 16:58:29.623922   19497 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-934361"
	I0422 16:58:29.621362   19497 addons.go:69] Setting storage-provisioner=true in profile "addons-934361"
	I0422 16:58:29.623991   19497 addons.go:234] Setting addon storage-provisioner=true in "addons-934361"
	I0422 16:58:29.624020   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.624249   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624305   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621368   19497 addons.go:69] Setting ingress=true in profile "addons-934361"
	I0422 16:58:29.624359   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624375   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624392   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.624395   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.624369   19497 addons.go:234] Setting addon ingress=true in "addons-934361"
	I0422 16:58:29.621848   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621946   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624517   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.621976   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624571   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.622134   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.624620   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623629   19497 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:58:29.624655   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.623661   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623785   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.623872   19497 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-934361"
	I0422 16:58:29.621367   19497 addons.go:69] Setting helm-tiller=true in profile "addons-934361"
	I0422 16:58:29.624893   19497 addons.go:234] Setting addon helm-tiller=true in "addons-934361"
	I0422 16:58:29.624933   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.625025   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.625042   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.625061   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.625258   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.625289   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.625361   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.625391   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.642294   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37195
	I0422 16:58:29.642879   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.642953   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33877
	I0422 16:58:29.643504   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.643521   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.643592   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.644034   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.644112   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.644130   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.644479   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.644511   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.645021   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.645535   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.645572   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.661968   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36307
	I0422 16:58:29.662504   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.663067   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.663100   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.663462   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.664068   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.664106   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.665271   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42287
	I0422 16:58:29.665726   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35647
	I0422 16:58:29.665899   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.666223   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.666530   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.666545   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.666888   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.667410   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44413
	I0422 16:58:29.667453   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.667474   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.667665   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.667681   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.668064   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.668579   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.668596   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.668646   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.668845   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.669564   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
	I0422 16:58:29.669933   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.670014   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.670456   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.670473   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.670825   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.670838   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.670867   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.671035   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.675533   19497 addons.go:234] Setting addon default-storageclass=true in "addons-934361"
	I0422 16:58:29.675575   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.675816   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.675837   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.675533   19497 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-934361"
	I0422 16:58:29.675914   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.676247   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.676289   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.687851   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34353
	I0422 16:58:29.688976   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 16:58:29.689138   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
	I0422 16:58:29.689583   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.689669   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.690224   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.690244   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.690650   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.691230   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.691271   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.691469   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0422 16:58:29.691618   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.691846   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.691861   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.692193   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.692348   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.692359   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.692406   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.692842   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.692858   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.692912   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.693307   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.693337   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.693831   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.693850   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.694477   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.694661   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.696620   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:29.697001   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.697020   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.697195   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34535
	I0422 16:58:29.697803   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.697879   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41005
	I0422 16:58:29.698216   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.698394   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.698405   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.698603   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.698620   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.698802   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.698889   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.699334   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.699364   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.699820   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.699845   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.705293   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I0422 16:58:29.705854   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.706432   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.706452   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.706828   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.707412   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.707448   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.711744   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0422 16:58:29.712765   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.713257   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.713274   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.713652   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.713796   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.715846   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.718977   19497 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0422 16:58:29.717493   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35317
	I0422 16:58:29.717908   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36837
	I0422 16:58:29.720150   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42123
	I0422 16:58:29.721316   19497 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 16:58:29.721334   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0422 16:58:29.721354   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.721772   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.721868   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.722177   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.722560   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.722575   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.722599   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.722615   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.722989   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.723003   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.723285   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.723357   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.723394   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.723927   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.723968   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.724426   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41793
	I0422 16:58:29.724554   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.725085   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.725250   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.725757   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.725773   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.726123   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.726683   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.726723   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.726946   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32941
	I0422 16:58:29.727437   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.727530   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.727768   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.729407   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0422 16:58:29.728241   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.728274   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.728537   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.728870   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.729885   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41099
	I0422 16:58:29.730971   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0422 16:58:29.730982   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0422 16:58:29.730998   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.731044   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.731084   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.732851   19497 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0422 16:58:29.731504   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.731787   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.731871   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.733173   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0422 16:58:29.734432   19497 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0422 16:58:29.734465   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0422 16:58:29.734484   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.735191   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.735249   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.735307   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.735329   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.735346   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.735607   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.736006   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32897
	I0422 16:58:29.736166   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.736366   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.736389   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.736742   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.736771   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.736983   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.737352   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.737368   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.737408   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.739110   19497 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0422 16:58:29.737754   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.738066   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.738090   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.738650   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.739430   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.740100   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.740653   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.740727   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.740733   19497 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0422 16:58:29.740748   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.740748   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0422 16:58:29.740767   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.741466   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.743017   19497 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0422 16:58:29.741524   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.741543   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.741666   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.744050   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0422 16:58:29.744609   19497 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 16:58:29.744621   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0422 16:58:29.744638   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.745323   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.745339   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.745417   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.745597   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.745665   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.745909   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.746655   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.746674   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.747045   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.747067   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.747504   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:29.747543   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:29.747732   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.747753   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.747772   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.749417   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0422 16:58:29.747842   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.748799   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.749748   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.752768   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0422 16:58:29.754205   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0422 16:58:29.752123   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41731
	I0422 16:58:29.752152   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.752180   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.752258   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.754388   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.754520   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.754864   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.755979   19497 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 16:58:29.756177   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.759095   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0422 16:58:29.757493   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.757544   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.758135   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0422 16:58:29.758272   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.761979   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0422 16:58:29.760779   19497 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 16:58:29.760849   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.761005   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.761414   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.764599   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0422 16:58:29.763593   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 16:58:29.763735   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.763972   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.767137   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0422 16:58:29.765961   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.765989   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.766071   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.766575   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38457
	I0422 16:58:29.770547   19497 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0422 16:58:29.769375   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.769431   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.770970   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37121
	I0422 16:58:29.771011   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.771342   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37665
	I0422 16:58:29.771623   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36563
	I0422 16:58:29.772241   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0422 16:58:29.772254   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0422 16:58:29.772274   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.772546   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.774169   19497 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0422 16:58:29.772957   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.773230   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.773283   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.773308   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.773432   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.774114   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.774348   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.775680   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.775683   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.775735   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.777072   19497 out.go:177]   - Using image docker.io/busybox:stable
	I0422 16:58:29.775884   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.775969   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.776487   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.776488   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.776523   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.776584   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.776585   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.776834   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39581
	I0422 16:58:29.778440   19497 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 16:58:29.778589   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.779691   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.779727   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.779735   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:29.779819   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.779829   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.779842   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0422 16:58:29.779940   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.779929   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.780687   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.781436   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:29.780690   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.780698   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.780717   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.781468   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.781503   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.781812   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.781828   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.781986   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.782869   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0422 16:58:29.783699   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.783710   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I0422 16:58:29.784674   19497 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 16:58:29.783724   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.784688   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0422 16:58:29.784704   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.783833   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.783736   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.784733   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.785182   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.785204   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.785412   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:29.785417   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.787847   19497 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0422 16:58:29.786371   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:29.787299   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.788981   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.789205   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:29.789205   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 16:58:29.789245   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 16:58:29.789259   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.789564   19497 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 16:58:29.790917   19497 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0422 16:58:29.789580   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 16:58:29.789849   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:29.792375   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.792423   19497 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0422 16:58:29.792434   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792442   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0422 16:58:29.792458   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.790995   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.791199   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:29.791720   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792561   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.792563   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.792588   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.792619   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792623   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792705   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.792760   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.792803   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.793720   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.794037   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.794239   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.794257   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.794307   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.794465   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.794849   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.795498   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.795790   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.796286   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.796307   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.796461   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.796510   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.796589   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.796754   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.796901   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.796957   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.797082   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.797489   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.797588   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.799372   19497 out.go:177]   - Using image docker.io/registry:2.8.3
	I0422 16:58:29.797864   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.797896   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.798433   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:29.800889   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.802375   19497 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0422 16:58:29.801059   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	W0422 16:58:29.801819   19497 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48392->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.803875   19497 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0422 16:58:29.804040   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.805184   19497 retry.go:31] will retry after 301.589206ms: ssh: handshake failed: read tcp 192.168.39.1:48392->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.805192   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0422 16:58:29.805218   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:29.805220   19497 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0422 16:58:29.806494   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0422 16:58:29.806512   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0422 16:58:29.806532   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	W0422 16:58:29.806563   19497 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48398->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.806584   19497 retry.go:31] will retry after 328.556573ms: ssh: handshake failed: read tcp 192.168.39.1:48398->192.168.39.135:22: read: connection reset by peer
	I0422 16:58:29.809104   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809350   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809674   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.809689   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:29.809694   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809706   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:29.809770   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.809867   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:29.809912   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.810016   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.810055   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:29.810136   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:29.810242   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:29.810374   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:30.030883   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0422 16:58:30.030901   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0422 16:58:30.080374   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0422 16:58:30.104428   19497 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0422 16:58:30.104448   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0422 16:58:30.109205   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 16:58:30.111929   19497 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 16:58:30.112008   19497 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 16:58:30.152063   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 16:58:30.154670   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0422 16:58:30.154694   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0422 16:58:30.158940   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 16:58:30.161825   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 16:58:30.172599   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 16:58:30.222046   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 16:58:30.222066   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0422 16:58:30.235902   19497 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0422 16:58:30.235933   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0422 16:58:30.244913   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0422 16:58:30.244939   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0422 16:58:30.251006   19497 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0422 16:58:30.251029   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0422 16:58:30.260120   19497 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0422 16:58:30.260138   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0422 16:58:30.286380   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0422 16:58:30.286405   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0422 16:58:30.391851   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 16:58:30.391876   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 16:58:30.438070   19497 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0422 16:58:30.438093   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0422 16:58:30.458038   19497 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0422 16:58:30.458060   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0422 16:58:30.471625   19497 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0422 16:58:30.471654   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0422 16:58:30.504997   19497 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0422 16:58:30.505022   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0422 16:58:30.523323   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0422 16:58:30.523349   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0422 16:58:30.627429   19497 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 16:58:30.627462   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 16:58:30.640646   19497 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0422 16:58:30.640667   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0422 16:58:30.691388   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0422 16:58:30.709458   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0422 16:58:30.749416   19497 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0422 16:58:30.749442   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0422 16:58:30.755622   19497 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0422 16:58:30.755643   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0422 16:58:30.774957   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0422 16:58:30.774978   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0422 16:58:30.785661   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 16:58:30.833245   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 16:58:30.849630   19497 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 16:58:30.849656   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0422 16:58:31.029060   19497 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0422 16:58:31.029084   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0422 16:58:31.071093   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0422 16:58:31.071114   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0422 16:58:31.081833   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0422 16:58:31.081855   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0422 16:58:31.135074   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0422 16:58:31.299436   19497 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0422 16:58:31.299471   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0422 16:58:31.364017   19497 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0422 16:58:31.364044   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0422 16:58:31.525084   19497 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:31.525120   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0422 16:58:31.576828   19497 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 16:58:31.576857   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0422 16:58:31.637259   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0422 16:58:31.637285   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0422 16:58:31.717188   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:31.760673   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0422 16:58:31.760701   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0422 16:58:32.014043   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 16:58:32.111510   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0422 16:58:32.111550   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0422 16:58:32.628933   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0422 16:58:32.628967   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0422 16:58:32.801705   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.721300301s)
	I0422 16:58:32.801752   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:32.801763   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:32.802115   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:32.802137   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:32.802164   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:32.802183   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:32.802195   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:32.802420   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:32.802468   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:32.815188   19497 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 16:58:32.815214   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0422 16:58:33.034108   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 16:58:33.435806   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.326567229s)
	I0422 16:58:33.435856   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:33.435871   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:33.435877   19497 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.323926134s)
	I0422 16:58:33.435853   19497 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.323801126s)
	I0422 16:58:33.435931   19497 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
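The completed command logged just above is the CoreDNS rewrite that produced this "host record injected" message: the Corefile is pulled from the coredns ConfigMap, a hosts block mapping host.minikube.internal to the host IP (with fallthrough) plus a log directive is spliced in ahead of the forward line, and the ConfigMap is replaced. Below is a minimal client-go sketch of the same idea, for orientation only; the function name, kubeconfig wiring, and error handling are assumptions and this is not minikube's actual code, though the ConfigMaps Get/Update calls are the real client-go API.

    // Illustrative sketch: splice a hosts {} block for host.minikube.internal
    // into the coredns Corefile, as the logged kubectl/sed pipeline does.
    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func injectHostRecord(ctx context.Context, kubeconfig, hostIP string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        // Insert the hosts block just before the "forward . /etc/resolv.conf" line,
        // mirroring the sed expression visible in the log above.
        hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
        _, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }

    func main() {
        // Paths and IP taken from the log; purely illustrative.
        _ = injectHostRecord(context.Background(), "/var/lib/minikube/kubeconfig", "192.168.39.1")
    }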
	I0422 16:58:33.436247   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:33.436285   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:33.436296   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:33.436317   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:33.436325   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:33.437383   19497 node_ready.go:35] waiting up to 6m0s for node "addons-934361" to be "Ready" ...
	I0422 16:58:33.437523   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:33.437572   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:33.437545   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:33.447475   19497 node_ready.go:49] node "addons-934361" has status "Ready":"True"
	I0422 16:58:33.447499   19497 node_ready.go:38] duration metric: took 10.089638ms for node "addons-934361" to be "Ready" ...
	I0422 16:58:33.447507   19497 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 16:58:33.486659   19497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace to be "Ready" ...
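The node_ready.go and pod_ready.go lines above show the test polling the API server until the node and the system-critical pods report Ready. As a hedged illustration of that kind of check (not minikube's implementation; the helper name and wiring are assumed), the Ready-condition lookup with client-go is roughly:

    // Illustrative readiness check: report whether a node's Ready condition is True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(context.Background(), cs, "addons-934361")
        fmt.Println(ready, err)
    }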
	I0422 16:58:33.960145   19497 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-934361" context rescaled to 1 replicas
	I0422 16:58:34.516937   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.364837669s)
	I0422 16:58:34.516992   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517006   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.516948   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.357978105s)
	I0422 16:58:34.517114   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517132   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.517275   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517358   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:34.517375   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517386   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.517338   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:34.517449   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517457   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:34.517462   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:34.517507   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:34.517516   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:34.517740   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517758   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:34.517807   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:34.517911   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:34.517931   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:35.710783   19497 pod_ready.go:102] pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:36.740759   19497 pod_ready.go:92] pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:36.740785   19497 pod_ready.go:81] duration metric: took 3.254095904s for pod "coredns-7db6d8ff4d-9kl4l" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:36.740798   19497 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:36.846184   19497 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0422 16:58:36.846227   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:36.849903   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:36.850345   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:36.850377   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:36.850519   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:36.850727   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:36.850913   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:36.851114   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:37.451630   19497 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0422 16:58:37.708620   19497 addons.go:234] Setting addon gcp-auth=true in "addons-934361"
	I0422 16:58:37.708691   19497 host.go:66] Checking if "addons-934361" exists ...
	I0422 16:58:37.709134   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:37.709175   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:37.724325   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I0422 16:58:37.724808   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:37.725323   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:37.725347   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:37.725813   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:37.726462   19497 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 16:58:37.726501   19497 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 16:58:37.742600   19497 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I0422 16:58:37.743058   19497 main.go:141] libmachine: () Calling .GetVersion
	I0422 16:58:37.743582   19497 main.go:141] libmachine: Using API Version  1
	I0422 16:58:37.743606   19497 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 16:58:37.743941   19497 main.go:141] libmachine: () Calling .GetMachineName
	I0422 16:58:37.744231   19497 main.go:141] libmachine: (addons-934361) Calling .GetState
	I0422 16:58:37.746082   19497 main.go:141] libmachine: (addons-934361) Calling .DriverName
	I0422 16:58:37.746339   19497 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0422 16:58:37.746363   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHHostname
	I0422 16:58:37.749201   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:37.749615   19497 main.go:141] libmachine: (addons-934361) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:5f:36", ip: ""} in network mk-addons-934361: {Iface:virbr1 ExpiryTime:2024-04-22 17:57:48 +0000 UTC Type:0 Mac:52:54:00:34:5f:36 Iaid: IPaddr:192.168.39.135 Prefix:24 Hostname:addons-934361 Clientid:01:52:54:00:34:5f:36}
	I0422 16:58:37.749646   19497 main.go:141] libmachine: (addons-934361) DBG | domain addons-934361 has defined IP address 192.168.39.135 and MAC address 52:54:00:34:5f:36 in network mk-addons-934361
	I0422 16:58:37.749763   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHPort
	I0422 16:58:37.749961   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHKeyPath
	I0422 16:58:37.750117   19497 main.go:141] libmachine: (addons-934361) Calling .GetSSHUsername
	I0422 16:58:37.750248   19497 sshutil.go:53] new ssh client: &{IP:192.168.39.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/addons-934361/id_rsa Username:docker}
	I0422 16:58:37.752692   19497 pod_ready.go:97] pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:37 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.135 HostIPs:[{IP:192.168.39.135}] PodIP: PodIPs:[] StartTime:2024-04-22 16:58:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-04-22 16:58:35 +0000 UTC,FinishedAt:2024-04-22 16:58:35 +0000 UTC,ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f Started:0xc00216768c AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0422 16:58:37.752726   19497 pod_ready.go:81] duration metric: took 1.011920899s for pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace to be "Ready" ...
	E0422 16:58:37.752741   19497 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-vxk4x" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:37 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason:PodFailed Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-04-22 16:58:29 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.135 HostIPs:[{IP:192.168.39.135}] PodIP: PodIPs:[] StartTime:2024-04-22 16:58:29 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2024-04-22 16:58:35 +0000 UTC,FinishedAt:2024-04-22 16:58:35 +0000 UTC,ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://912fc6ef543e8c56d494236b1bd09396b1e83f906e3be100f78bb8d5eed08d0f Started:0xc00216768c AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0422 16:58:37.752751   19497 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.761328   19497 pod_ready.go:92] pod "etcd-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.761361   19497 pod_ready.go:81] duration metric: took 8.597045ms for pod "etcd-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.761377   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.767965   19497 pod_ready.go:92] pod "kube-apiserver-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.767986   19497 pod_ready.go:81] duration metric: took 6.600972ms for pod "kube-apiserver-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.767995   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.778550   19497 pod_ready.go:92] pod "kube-controller-manager-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.778573   19497 pod_ready.go:81] duration metric: took 10.572069ms for pod "kube-controller-manager-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.778586   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbd87" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.785325   19497 pod_ready.go:92] pod "kube-proxy-zbd87" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.785348   19497 pod_ready.go:81] duration metric: took 6.756331ms for pod "kube-proxy-zbd87" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.785358   19497 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:38.147267   19497 pod_ready.go:92] pod "kube-scheduler-addons-934361" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:38.147293   19497 pod_ready.go:81] duration metric: took 361.929087ms for pod "kube-scheduler-addons-934361" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:38.147303   19497 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace to be "Ready" ...
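
Each of the pod_ready lines above is minikube polling one control-plane pod until it reports Ready, and skipping pods in a terminal phase (as it did for the failed coredns pod a few lines earlier). A minimal Go sketch of that per-pod check using the upstream corev1 types; the function name and error handling are illustrative, not minikube's actual pod_ready.go code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod should be treated as "Ready": terminal
// phases (Failed/Succeeded) are rejected outright, otherwise the PodReady
// condition decides.
func isPodReady(pod *corev1.Pod) (bool, error) {
	if pod.Status.Phase == corev1.PodFailed || pod.Status.Phase == corev1.PodSucceeded {
		return false, fmt.Errorf("pod %q has terminal phase %q", pod.Name, pod.Status.Phase)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase:      corev1.PodRunning,
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	ready, err := isPodReady(pod)
	fmt.Println(ready, err) // true <nil>
}
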
	I0422 16:58:39.006061   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.844206343s)
	I0422 16:58:39.006120   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006118   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.833488304s)
	I0422 16:58:39.006133   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006158   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006179   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006205   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.314757591s)
	I0422 16:58:39.006231   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.296734805s)
	I0422 16:58:39.006238   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006251   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006249   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006262   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006382   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.173106073s)
	I0422 16:58:39.006406   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006415   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006490   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.871380694s)
	I0422 16:58:39.006515   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006525   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006637   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006680   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006669   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006688   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006694   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006696   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006703   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006705   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006714   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006845   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006872   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006893   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006899   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006907   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006914   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.006956   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.006975   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.006981   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.006989   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.006996   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.007204   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.007228   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.007234   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.007243   19497 addons.go:470] Verifying addon metrics-server=true in "addons-934361"
	I0422 16:58:39.007283   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.007292   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.007300   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.007306   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.007351   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.007357   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.007364   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.007371   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.008721   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.008749   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.008755   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.008764   19497 addons.go:470] Verifying addon ingress=true in "addons-934361"
	I0422 16:58:39.010700   19497 out.go:177] * Verifying ingress addon...
	I0422 16:58:39.008894   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.008923   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.008954   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.008969   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.008985   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.009028   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.009042   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.22060972s)
	I0422 16:58:39.009051   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.009072   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.010750   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.010791   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.012442   19497 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-934361 service yakd-dashboard -n yakd-dashboard
	
	I0422 16:58:39.010806   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.010813   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.010824   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.012924   19497 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0422 16:58:39.014041   19497 addons.go:470] Verifying addon registry=true in "addons-934361"
	I0422 16:58:39.014090   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.015645   19497 out.go:177] * Verifying registry addon...
	I0422 16:58:39.014351   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.015680   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.015697   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.014375   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.015713   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.016035   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.016055   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.017270   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:39.017840   19497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0422 16:58:39.027808   19497 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0422 16:58:39.027832   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:39.037067   19497 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0422 16:58:39.037088   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:39.065421   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.065438   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.065889   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.065927   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.065953   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	W0422 16:58:39.066046   19497 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0422 16:58:39.081038   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:39.081061   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:39.081400   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:39.081414   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:39.081429   19497 main.go:141] libmachine: Making call to close connection to plugin binary
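
The warning above about the local-path storage class is a plain optimistic-concurrency conflict: the object was read, modified by another writer, and the stale update was rejected with a resourceVersion mismatch. The usual cure is to re-read and re-apply the change in a conflict-retry loop. A hedged client-go sketch of that pattern (the helper name and kubeconfig loading are illustrative; this is not how minikube's addon code is structured):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-reading the object on every attempt so a concurrent writer cannot leave
// us holding a stale resourceVersion.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := markNonDefault(context.Background(), kubernetes.NewForConfigOrDie(cfg), "local-path"); err != nil {
		fmt.Println("could not update storage class:", err)
	}
}
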
	I0422 16:58:39.640868   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:39.641565   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.031701   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.038372   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:40.068672   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.054584148s)
	I0422 16:58:40.068729   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.068742   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.068813   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.351558932s)
	W0422 16:58:40.068877   19497 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0422 16:58:40.068908   19497 retry.go:31] will retry after 200.447904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0422 16:58:40.069082   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.069100   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.069116   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.069118   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.069128   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.069395   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.069429   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.069436   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.205606   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:40.269748   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
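
The failed apply above is a CRD ordering problem: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml was submitted in the same kubectl invocation that created the snapshot.storage.k8s.io CRDs, before the API server had established them, so the kind lookup failed. minikube's retry.go simply waits (about 200ms here) and re-runs the apply with --force, by which time the CRDs are served. A minimal sketch of that retry-after-delay pattern in Go; applyWithRetry and the fake apply closure are illustrative stand-ins, not minikube's actual retry API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// applyWithRetry re-runs apply until it succeeds or the attempts are
// exhausted, sleeping a fixed delay between failures.
func applyWithRetry(attempts int, delay time.Duration, apply func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := applyWithRetry(3, 200*time.Millisecond, func() error {
		calls++
		if calls == 1 {
			// first attempt: the CRD is not yet established, mirroring the log above
			return errors.New(`no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"`)
		}
		return nil
	})
	fmt.Println("done:", err) // done: <nil> after the second attempt
}
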
	I0422 16:58:40.549711   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.557703   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:40.815326   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.781169496s)
	I0422 16:58:40.815365   19497 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.069010233s)
	I0422 16:58:40.815373   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.815385   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.817182   19497 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:40.815707   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.815749   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.818594   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.818607   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:40.818616   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:40.820477   19497 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0422 16:58:40.818881   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:40.818884   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:40.822519   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:40.822532   19497 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-934361"
	I0422 16:58:40.822560   19497 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0422 16:58:40.822576   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0422 16:58:40.824418   19497 out.go:177] * Verifying csi-hostpath-driver addon...
	I0422 16:58:40.826307   19497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0422 16:58:40.851943   19497 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0422 16:58:40.851966   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:41.000187   19497 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0422 16:58:41.000209   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0422 16:58:41.018436   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:41.024696   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:41.099173   19497 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 16:58:41.099198   19497 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0422 16:58:41.178966   19497 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 16:58:41.331872   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:41.518380   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:41.547479   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:41.836955   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:42.019270   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:42.023377   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:42.338024   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:42.522155   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:42.528992   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:42.655971   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:42.832670   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.018695   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:43.023084   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:43.107299   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.83749728s)
	I0422 16:58:43.107345   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.107360   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.107670   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.107731   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.107745   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.107754   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.107687   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:43.107993   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.108012   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.108023   19497 main.go:141] libmachine: (addons-934361) DBG | Closing plugin on server side
	I0422 16:58:43.347000   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.444263   19497 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.265255283s)
	I0422 16:58:43.444325   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.444342   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.444698   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.444742   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.444755   19497 main.go:141] libmachine: Making call to close driver server
	I0422 16:58:43.444764   19497 main.go:141] libmachine: (addons-934361) Calling .Close
	I0422 16:58:43.444988   19497 main.go:141] libmachine: Successfully made call to close driver server
	I0422 16:58:43.445012   19497 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 16:58:43.447111   19497 addons.go:470] Verifying addon gcp-auth=true in "addons-934361"
	I0422 16:58:43.448925   19497 out.go:177] * Verifying gcp-auth addon...
	I0422 16:58:43.450935   19497 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0422 16:58:43.483068   19497 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0422 16:58:43.483089   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:43.525773   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:43.526704   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:43.832651   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.955657   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:44.019263   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:44.022929   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:44.333026   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:44.455010   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:44.519769   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:44.522468   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:44.656170   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:44.832954   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:44.954724   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:45.018677   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:45.022819   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:45.333106   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:45.454153   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:45.521837   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:45.528196   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:45.832605   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:45.955231   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:46.020512   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:46.023459   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:46.331515   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:46.455366   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:46.521604   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:46.525546   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:46.667781   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:46.937303   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:46.955241   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:47.020261   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:47.024125   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:47.332558   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:47.455208   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:47.519659   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:47.522186   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:47.832715   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:47.955946   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:48.021590   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:48.027599   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:48.335144   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:48.455451   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:48.520352   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:48.527448   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:48.837470   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:48.954347   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:49.018433   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:49.022333   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:49.153774   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:49.332701   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:49.454701   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:49.519232   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:49.522336   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:49.832483   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:49.955422   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:50.018918   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:50.021828   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:50.332711   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:50.455112   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:50.520253   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:50.523221   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:50.832226   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:50.954877   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:51.019323   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:51.021996   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:51.153813   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:51.332486   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:51.455588   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:51.519163   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:51.522231   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:51.838073   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:51.955018   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:52.021372   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:52.023681   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:52.333391   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:52.455290   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:52.519950   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:52.526301   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:52.832636   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:52.955324   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:53.019394   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:53.022130   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:53.154480   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:53.332285   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:53.455565   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:53.519248   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:53.522072   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:53.833019   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:53.954557   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:54.018700   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:54.022538   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:54.332099   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:54.453918   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:54.519334   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:54.522244   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:54.833090   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:54.954870   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:55.019029   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:55.022643   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:55.156910   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:55.332593   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:55.455106   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:55.519629   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:55.527103   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:55.831686   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.144651   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:56.147604   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:56.147695   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:56.333178   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.455272   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:56.518870   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:56.523177   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:56.832473   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.954883   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:57.018948   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:57.022803   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:57.332556   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:57.456282   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:57.518129   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:57.521904   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:57.653734   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:57.836398   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:57.955576   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:58.018770   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:58.021652   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:58.332748   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:58.454935   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:58.520423   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:58.523168   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:58.831811   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:58.958475   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:59.019134   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:59.025129   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:59.336018   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:59.455492   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:59.518376   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:59.522497   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:59.654153   19497 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:59.835546   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:59.955284   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:00.018503   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:00.022328   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:00.332359   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:00.456069   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:00.521387   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:00.523989   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:00.653894   19497 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace has status "Ready":"True"
	I0422 16:59:00.653921   19497 pod_ready.go:81] duration metric: took 22.506610483s for pod "nvidia-device-plugin-daemonset-ht2fz" in "kube-system" namespace to be "Ready" ...
	I0422 16:59:00.653930   19497 pod_ready.go:38] duration metric: took 27.206413168s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 16:59:00.653948   19497 api_server.go:52] waiting for apiserver process to appear ...
	I0422 16:59:00.654014   19497 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 16:59:00.673151   19497 api_server.go:72] duration metric: took 31.054671465s to wait for apiserver process to appear ...
	I0422 16:59:00.673179   19497 api_server.go:88] waiting for apiserver healthz status ...
	I0422 16:59:00.673198   19497 api_server.go:253] Checking apiserver healthz at https://192.168.39.135:8443/healthz ...
	I0422 16:59:00.678202   19497 api_server.go:279] https://192.168.39.135:8443/healthz returned 200:
	ok
	I0422 16:59:00.680182   19497 api_server.go:141] control plane version: v1.30.0
	I0422 16:59:00.680209   19497 api_server.go:131] duration metric: took 7.023803ms to wait for apiserver health ...
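
The healthz probe above is an HTTPS GET against the API server's /healthz endpoint; a 200 response with body "ok" is taken as healthy, after which the control-plane version is read. A stdlib-only Go sketch of the same probe; note that the real flow authenticates and verifies the cluster CA rather than skipping TLS verification as this simplified example does:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.135:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
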
	I0422 16:59:00.680217   19497 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 16:59:00.695312   19497 system_pods.go:59] 18 kube-system pods found
	I0422 16:59:00.695343   19497 system_pods.go:61] "coredns-7db6d8ff4d-9kl4l" [46deec4f-c97e-48aa-b1ca-9c679e0a64e2] Running
	I0422 16:59:00.695356   19497 system_pods.go:61] "csi-hostpath-attacher-0" [d74d70fb-d561-4814-8fe7-4ff8c0a23bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0422 16:59:00.695362   19497 system_pods.go:61] "csi-hostpath-resizer-0" [9b290af7-b399-4289-82ab-afc3b871ed37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0422 16:59:00.695369   19497 system_pods.go:61] "csi-hostpathplugin-zjt6m" [31721d0b-bd0c-4744-bad2-98ec78059355] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0422 16:59:00.695375   19497 system_pods.go:61] "etcd-addons-934361" [c2ae446c-1bbb-455a-a0fb-f17ec9c211dd] Running
	I0422 16:59:00.695381   19497 system_pods.go:61] "kube-apiserver-addons-934361" [b19e33d4-127e-4da6-808f-32eb6d5a3d90] Running
	I0422 16:59:00.695386   19497 system_pods.go:61] "kube-controller-manager-addons-934361" [6163c15c-68c4-4c0a-93ec-970325ddd8ce] Running
	I0422 16:59:00.695392   19497 system_pods.go:61] "kube-ingress-dns-minikube" [0a75b318-14a2-4ad7-805f-363d1863bbdb] Running
	I0422 16:59:00.695399   19497 system_pods.go:61] "kube-proxy-zbd87" [b08b8c4d-9f59-4f64-8503-e5d055487f74] Running
	I0422 16:59:00.695408   19497 system_pods.go:61] "kube-scheduler-addons-934361" [961651f6-0a94-4bc5-883d-63e42ce76c03] Running
	I0422 16:59:00.695414   19497 system_pods.go:61] "metrics-server-c59844bb4-9rwbq" [be72f5e4-ae81-48d6-b57f-d9640e75904a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 16:59:00.695420   19497 system_pods.go:61] "nvidia-device-plugin-daemonset-ht2fz" [9f97974a-3d52-4db6-9187-920d1c7c72f3] Running
	I0422 16:59:00.695427   19497 system_pods.go:61] "registry-proxy-nzg6s" [033a658d-3f50-4962-ac56-dcf30ac650c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0422 16:59:00.695435   19497 system_pods.go:61] "registry-srp9r" [b6334572-9ae2-4f63-8d71-d5ec2df78324] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 16:59:00.695445   19497 system_pods.go:61] "snapshot-controller-745499f584-hlhfk" [b43966fb-693d-4dda-b93e-dcfdeb860226] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.695453   19497 system_pods.go:61] "snapshot-controller-745499f584-p498f" [87a1cbcf-3ba9-42bf-a292-39bc33617c0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.695460   19497 system_pods.go:61] "storage-provisioner" [eddb4fb4-7de5-44ef-9bac-3930ce87160c] Running
	I0422 16:59:00.695465   19497 system_pods.go:61] "tiller-deploy-6677d64bcd-fp7n8" [8ca5bebc-4067-46c4-b889-2eae5e85437d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0422 16:59:00.695475   19497 system_pods.go:74] duration metric: took 15.251388ms to wait for pod list to return data ...
	I0422 16:59:00.695489   19497 default_sa.go:34] waiting for default service account to be created ...
	I0422 16:59:00.698527   19497 default_sa.go:45] found service account: "default"
	I0422 16:59:00.698549   19497 default_sa.go:55] duration metric: took 3.049747ms for default service account to be created ...
	I0422 16:59:00.698557   19497 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 16:59:00.710668   19497 system_pods.go:86] 18 kube-system pods found
	I0422 16:59:00.710697   19497 system_pods.go:89] "coredns-7db6d8ff4d-9kl4l" [46deec4f-c97e-48aa-b1ca-9c679e0a64e2] Running
	I0422 16:59:00.710705   19497 system_pods.go:89] "csi-hostpath-attacher-0" [d74d70fb-d561-4814-8fe7-4ff8c0a23bd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0422 16:59:00.710711   19497 system_pods.go:89] "csi-hostpath-resizer-0" [9b290af7-b399-4289-82ab-afc3b871ed37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0422 16:59:00.710720   19497 system_pods.go:89] "csi-hostpathplugin-zjt6m" [31721d0b-bd0c-4744-bad2-98ec78059355] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0422 16:59:00.710724   19497 system_pods.go:89] "etcd-addons-934361" [c2ae446c-1bbb-455a-a0fb-f17ec9c211dd] Running
	I0422 16:59:00.710729   19497 system_pods.go:89] "kube-apiserver-addons-934361" [b19e33d4-127e-4da6-808f-32eb6d5a3d90] Running
	I0422 16:59:00.710735   19497 system_pods.go:89] "kube-controller-manager-addons-934361" [6163c15c-68c4-4c0a-93ec-970325ddd8ce] Running
	I0422 16:59:00.710741   19497 system_pods.go:89] "kube-ingress-dns-minikube" [0a75b318-14a2-4ad7-805f-363d1863bbdb] Running
	I0422 16:59:00.710746   19497 system_pods.go:89] "kube-proxy-zbd87" [b08b8c4d-9f59-4f64-8503-e5d055487f74] Running
	I0422 16:59:00.710754   19497 system_pods.go:89] "kube-scheduler-addons-934361" [961651f6-0a94-4bc5-883d-63e42ce76c03] Running
	I0422 16:59:00.710764   19497 system_pods.go:89] "metrics-server-c59844bb4-9rwbq" [be72f5e4-ae81-48d6-b57f-d9640e75904a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 16:59:00.710780   19497 system_pods.go:89] "nvidia-device-plugin-daemonset-ht2fz" [9f97974a-3d52-4db6-9187-920d1c7c72f3] Running
	I0422 16:59:00.710789   19497 system_pods.go:89] "registry-proxy-nzg6s" [033a658d-3f50-4962-ac56-dcf30ac650c7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0422 16:59:00.710799   19497 system_pods.go:89] "registry-srp9r" [b6334572-9ae2-4f63-8d71-d5ec2df78324] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 16:59:00.710807   19497 system_pods.go:89] "snapshot-controller-745499f584-hlhfk" [b43966fb-693d-4dda-b93e-dcfdeb860226] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.710816   19497 system_pods.go:89] "snapshot-controller-745499f584-p498f" [87a1cbcf-3ba9-42bf-a292-39bc33617c0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:59:00.710821   19497 system_pods.go:89] "storage-provisioner" [eddb4fb4-7de5-44ef-9bac-3930ce87160c] Running
	I0422 16:59:00.710828   19497 system_pods.go:89] "tiller-deploy-6677d64bcd-fp7n8" [8ca5bebc-4067-46c4-b889-2eae5e85437d] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0422 16:59:00.710837   19497 system_pods.go:126] duration metric: took 12.275729ms to wait for k8s-apps to be running ...
	I0422 16:59:00.710849   19497 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 16:59:00.710905   19497 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 16:59:00.727079   19497 system_svc.go:56] duration metric: took 16.221619ms WaitForService to wait for kubelet
	I0422 16:59:00.727111   19497 kubeadm.go:576] duration metric: took 31.108635829s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 16:59:00.727139   19497 node_conditions.go:102] verifying NodePressure condition ...
	I0422 16:59:00.730323   19497 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 16:59:00.730346   19497 node_conditions.go:123] node cpu capacity is 2
	I0422 16:59:00.730364   19497 node_conditions.go:105] duration metric: took 3.220639ms to run NodePressure ...
	I0422 16:59:00.730375   19497 start.go:240] waiting for startup goroutines ...
	I0422 16:59:00.832333   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:00.955365   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:01.018477   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:01.023113   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:01.332525   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:01.456461   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:01.519454   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:01.522465   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:01.832737   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:01.954518   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:02.019060   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:02.024010   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:02.332172   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:02.454958   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:02.519479   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:02.522643   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:02.832556   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:02.955331   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:03.018583   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:03.021650   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:03.352164   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:03.455870   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:03.518922   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:03.522617   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:03.836415   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:03.954931   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:04.018897   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:04.022915   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:04.331955   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:04.455108   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:04.518964   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:04.523760   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:04.832752   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:04.954672   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:05.018135   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:05.021706   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:05.332490   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:05.455527   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:05.519611   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:05.522007   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:05.832502   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:05.956074   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:06.019102   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:06.022704   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:06.333197   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:06.459316   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:06.520339   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:06.522470   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:06.832593   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:06.954773   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:07.018955   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:07.022725   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:07.334410   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:07.455636   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:07.519219   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:07.521461   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:07.832200   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:07.955158   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:08.019558   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:08.022464   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:08.332661   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:08.455805   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:08.519939   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:08.523282   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:08.832773   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:08.955149   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:09.019474   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:09.022836   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:09.335032   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:09.467034   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:09.523110   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:09.532325   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:09.833045   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:09.955145   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:10.020821   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:10.023999   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:10.443643   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:10.455860   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:10.519036   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:10.524123   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:10.837655   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:10.955349   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:11.019251   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:11.022427   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:11.333272   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:11.454776   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:11.518931   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:11.521744   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:11.834920   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:11.954901   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:12.019408   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:12.021769   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:12.332004   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:12.463261   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:12.519574   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:12.522601   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:12.833736   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:12.955148   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:13.019045   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:13.021817   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:13.333808   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:13.454975   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:13.524601   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:13.529288   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:13.832380   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:13.955608   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:14.023102   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:14.025272   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:14.336341   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:14.455159   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:14.519407   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:14.522395   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:14.831687   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:14.955172   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:15.019066   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:15.021903   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:15.335790   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:15.942968   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:15.943660   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:15.944051   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:15.944081   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:15.954884   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:16.018649   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:16.021430   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:16.332692   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:16.454201   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:16.519543   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:16.522306   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:16.832769   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:16.954370   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:17.024055   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:17.025825   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:17.332912   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:17.455168   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:17.519492   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:17.521811   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:17.831983   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:17.954627   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:18.018576   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:18.022898   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:18.333655   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:18.456877   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:18.519404   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:18.522065   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:18.835358   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:18.955448   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:19.019659   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:19.022604   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:19.331906   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:19.455041   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:19.519380   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:19.522159   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:19.834012   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:19.954922   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:20.021025   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:20.023034   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:20.333268   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:20.456596   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:20.521493   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:20.525592   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:20.832298   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:20.956730   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:21.018675   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:21.021811   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:21.331863   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:21.454327   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:21.518476   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:21.522840   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:21.842315   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:21.958638   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:22.018223   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:22.023322   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:22.335989   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:22.457733   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:22.518767   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:22.522392   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:22.832400   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:22.955377   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:23.020538   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:23.022586   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:23.331360   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:23.455465   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:23.519464   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:23.526589   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:23.832687   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:23.955399   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:24.018181   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:24.022394   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:24.333134   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:24.456556   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:24.640271   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:24.644793   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:24.833279   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:24.954730   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:25.020020   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:25.022327   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:25.334455   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:25.455363   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:25.518498   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:25.522276   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:25.833180   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:25.955332   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:26.019331   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:26.022381   19497 kapi.go:107] duration metric: took 47.004537115s to wait for kubernetes.io/minikube-addons=registry ...
	I0422 16:59:26.336297   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:26.455209   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:26.519099   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:26.832544   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:26.955182   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:27.019568   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:27.333360   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:27.454763   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:27.518965   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:27.832338   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:27.957013   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:28.019607   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:28.331402   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:28.455284   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:28.518688   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:28.833020   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:28.955935   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:29.019032   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:29.331711   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:29.454860   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:29.518849   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:29.832633   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:29.955809   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:30.018531   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:30.332727   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:30.455138   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:30.519688   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:30.833190   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:30.954628   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:31.018744   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:31.332365   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:31.454948   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:31.518692   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:31.830920   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:31.955196   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:32.019081   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:32.332030   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:32.454820   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:32.519258   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:32.832667   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:32.954293   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:33.019550   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:33.331525   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:33.455392   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:33.519561   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:33.832290   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:33.955282   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:34.020184   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:34.332619   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:34.455151   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:34.519517   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:34.832511   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:34.955941   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:35.019281   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:35.331777   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:35.454195   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:35.520806   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:35.833038   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:35.955174   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:36.019103   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:36.332648   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:36.455515   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:36.519544   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:36.835933   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:36.955439   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:37.018698   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:37.331551   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:37.455224   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:37.519523   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:37.832875   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:37.955777   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:38.019656   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:38.332159   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:38.455418   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:38.518392   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:38.838018   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:38.954540   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:39.018854   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:39.332139   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:39.454743   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:39.518668   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:39.831896   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:39.957884   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:40.018523   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:40.332949   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:40.454696   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:40.518605   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:40.850671   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:40.954194   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:41.018918   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:41.332240   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:41.455611   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:41.518979   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:41.832121   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:41.957297   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:42.018914   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:42.333712   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:42.455656   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:42.519107   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:42.836296   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:42.955383   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:43.019367   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:43.331257   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:43.454662   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:43.518675   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:43.832634   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:43.955410   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:44.025010   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:44.332594   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:44.455499   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:44.518502   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:44.833757   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:44.954682   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:45.019144   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:45.332120   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:45.455160   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:45.519114   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:45.832355   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:45.954569   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:46.020014   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:46.332265   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:46.456304   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:46.518698   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:46.831266   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:46.956170   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:47.019203   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:47.331886   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:47.454618   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:47.518415   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:47.832319   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:47.954993   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:48.020497   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:48.344170   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:48.454790   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:48.521902   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:48.837667   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:48.956315   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:49.018825   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:49.331206   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:49.455886   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:49.518726   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:49.831625   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:49.955430   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:50.018472   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:50.332230   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:50.455255   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:50.519549   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:50.833028   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:50.954359   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:51.018462   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:51.332314   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:51.455382   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:51.518510   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:51.832972   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:51.954863   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:52.018807   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:52.335063   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:52.454555   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:52.518980   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:52.831213   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:52.955095   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:53.019167   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:53.332215   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:53.454932   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:53.519290   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:53.835498   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:53.955646   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:54.018560   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:54.332934   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:54.454971   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:54.519114   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:54.834845   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:54.957216   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:55.019195   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:55.332013   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:55.454628   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:55.518675   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:55.832940   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:55.955051   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:56.018682   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:56.333290   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:56.455051   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:56.519438   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:56.832320   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:56.957714   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:57.028329   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:57.333234   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:57.454766   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:57.519145   19497 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:57.832328   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:57.955455   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:58.018726   19497 kapi.go:107] duration metric: took 1m19.005798441s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0422 16:59:58.332486   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:58.455354   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:58.833423   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:58.954836   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:59.332687   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:59.455673   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:59.832467   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:59.955674   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 17:00:00.331758   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:00.455761   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 17:00:00.832703   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:00.954568   19497 kapi.go:107] duration metric: took 1m17.503631425s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0422 17:00:00.956644   19497 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-934361 cluster.
	I0422 17:00:00.958231   19497 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0422 17:00:00.959822   19497 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0422 17:00:01.332022   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:01.833334   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:02.332440   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:02.832672   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:03.333087   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:03.833008   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:04.333993   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:04.833559   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:05.333628   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:05.832645   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:06.332476   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:06.835867   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:07.337346   19497 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 17:00:07.833043   19497 kapi.go:107] duration metric: took 1m27.0067329s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0422 17:00:07.835095   19497 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, helm-tiller, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0422 17:00:07.836547   19497 addons.go:505] duration metric: took 1m38.218045148s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner metrics-server yakd helm-tiller storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0422 17:00:07.836591   19497 start.go:245] waiting for cluster config update ...
	I0422 17:00:07.836625   19497 start.go:254] writing updated cluster config ...
	I0422 17:00:07.836871   19497 ssh_runner.go:195] Run: rm -f paused
	I0422 17:00:07.888626   19497 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 17:00:07.890795   19497 out.go:177] * Done! kubectl is now configured to use "addons-934361" cluster and "default" namespace by default
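	Note on the gcp-auth messages above: as the output itself states, existing pods only pick up (or skip) the credential mount when they are recreated, so the `gcp-auth-skip-secret` label has to be part of the pod configuration at creation time. A minimal sketch of such a pod spec follows; the pod name and container are hypothetical and chosen only for illustration, and the "true" value is a conventional choice (the message above only mentions the label key):

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds-demo          # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"   # label key referenced in the gcp-auth output above
	  spec:
	    containers:
	    - name: app
	      image: gcr.io/google-samples/hello-app:1.0   # same sample image used elsewhere in this report

	Applying a manifest like this with kubectl (against the addons-934361 context used throughout this report) would start the pod without the mounted GCP credentials, while unlabeled pods continue to receive them.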
	
	
	==> CRI-O <==
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.014735112Z" level=debug msg="Container or sandbox exited: 0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a.0P6MM2" file="server/server.go:810"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.014763549Z" level=debug msg="Event: CREATE        \"/var/run/crio/exits/0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a\"" file="server/server.go:805"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.014781696Z" level=debug msg="Container or sandbox exited: 0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a" file="server/server.go:810"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.014799594Z" level=debug msg="container exited and found: 0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a" file="server/server.go:825"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.014915658Z" level=debug msg="Event: RENAME        \"/var/run/crio/exits/0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a.0P6MM2\"" file="server/server.go:805"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.014920718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b9f69e6-ce08-49ab-af6c-217bb7d575b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.015450922Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b9f69e6-ce08-49ab-af6c-217bb7d575b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.015935741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9df277752128a44f9e298d1a2296a2bce4750d3a1c337bf3938708926a65cb56,PodSandboxId:2c3dc4771468f9728a01b77e14ad419ea8b99f83f75c25c4b3b7f4f2d3bb47f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713805369539163809,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-zdkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f7d61ea-53c0-4922-9b21-e8daf0c21bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 2657a6f6,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9926944e9d15780564b120bb3d82fdc6172add917ca19b3a512b5e95405bdc5d,PodSandboxId:bb751305424892411c6ebd876b38752fd70b8c5c74a8b14c493973ed31688055,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713805318561774948,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-jx57l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 05d7c185-fcb3-4db1-941f-58c4cf86a75f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 4680ca88,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f302e7c44f2215a4eb4dc05c17696313ac15c01877ceeb4d34d0b451097e36ae,PodSandboxId:9b6c83eadeecd540eb3d823a28df9c9191d19e1349bb76d7ee640f5ace8fd487,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713805229544862244,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 54f74d8d-0de6-4905-8880-cdc716c944b3,},Annotations:map[string]string{io.kubernetes.container.hash: 7742d2c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac,PodSandboxId:e7dc89ac1feef0671d5d95fff08c7b88cf4313ffb379e61507b3fc92b599af4c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713805200007867342,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-hb6nw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 616d4a54-5dfb-45cc-9b0a-a2461bbdc3e8,},Annotations:map[string]string{io.kubernetes.container.hash: ff242c65,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992ddaec1770dd27b6fcf280f55f128597b66fded6028ed19bd06581c6a7af4,PodSandboxId:f4386e68702ef1455c642103ff9814ba90f77b0f7ef5e4021e8659af4c45530f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171380
5176582586683,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-dqx5m,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3cca16d5-c0b9-4588-87c2-aa2cdbcbe7d9,},Annotations:map[string]string{io.kubernetes.container.hash: ecfe4f92,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a,PodSandboxId:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713805167902556409,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-9rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be72f5e4-ae81-48d6-b57f-d9640e75904a,},Annotations:map[string]string{io.kubernetes.container.hash: dde1c527,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0,PodSandboxId:9a83f28ec25a17677c9a3dbe8eaf24730d2f8678d9e0d72053a42122b232eb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805116439144899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddb4fb4-7de5-44ef-9bac-3930ce87160c,},Annotations:map[string]string{io.kubernetes.container.hash: a017758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839,PodSandboxId:c16f432ead2fcea0e9e3dc709ce1f66044a91451e425e92bb0e3b50e8b8fd5d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805114646949234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9kl4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46deec4f-c97e-48aa-b1ca-9c679e0a64e2,},Annotations:map[string]string{io.kubernetes.container.hash: 85712065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160,PodSandb
oxId:6a5e4353dde8f7e0fadf7a0693d9c624a94121396d4daa820ffb5ed996ef7e32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805111283702745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbd87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08b8c4d-9f59-4f64-8503-e5d055487f74,},Annotations:map[string]string{io.kubernetes.container.hash: 4d716e8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d,PodSandboxId:aa58a6a1d4ea9b4ae0775732d914
9c31b5d9f97c57149b082c6a5ba21fd7d06a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805090322229442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d903f02e20fa6303480e5550d5ff53c6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09,PodSandboxId:ed579bd1a44698e8db
b907cd5e2b51bbcc0b49cf0b9581ea5a0502d71b9a3462,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713805090294253425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a435bd17231af3fb00e256e0ac8b418,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e,PodSandboxId:9c1684f3b30a58043ea007f2cf40ec39dc7
abe196b588d110a0be598ded10ee9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713805090237410245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70b88177cb739840d9af5145b71cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 48ba8371,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0,PodSandboxId:37d43ed8797170034a3fe41c1f1b7b1de3a9600a846a92f67d2a
7cfd4d831e11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713805090204249501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c66e348c58730d7efb8ebd6834f7506,},Annotations:map[string]string{io.kubernetes.container.hash: 68bbc046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b9f69e6-ce08-49ab-af6c-217bb7d575b1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.053459117Z" level=debug msg="Unmounted container 0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a" file="storage/runtime.go:495" id=5c655c6f-0d7e-4327-b0b6-af2a8012d741 name=/runtime.v1.RuntimeService/StopContainer
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.069423477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=909d5703-79e7-4e79-a64e-5ae05abc7bf0 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.069494716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=909d5703-79e7-4e79-a64e-5ae05abc7bf0 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.070966902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0160549b-e85e-4f11-9bb6-aa3c7ca8dbb8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.072467824Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713805504072441070,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:579877,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0160549b-e85e-4f11-9bb6-aa3c7ca8dbb8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.073072113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43b64c86-bb5e-4ce4-8af1-a2c188ce551a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.073179583Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43b64c86-bb5e-4ce4-8af1-a2c188ce551a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.073497481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9df277752128a44f9e298d1a2296a2bce4750d3a1c337bf3938708926a65cb56,PodSandboxId:2c3dc4771468f9728a01b77e14ad419ea8b99f83f75c25c4b3b7f4f2d3bb47f0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713805369539163809,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-zdkzg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f7d61ea-53c0-4922-9b21-e8daf0c21bd7,},Annotations:map[string]string{io.kubernetes.container.hash: 2657a6f6,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9926944e9d15780564b120bb3d82fdc6172add917ca19b3a512b5e95405bdc5d,PodSandboxId:bb751305424892411c6ebd876b38752fd70b8c5c74a8b14c493973ed31688055,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713805318561774948,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7559bf459f-jx57l,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 05d7c185-fcb3-4db1-941f-58c4cf86a75f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 4680ca88,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f302e7c44f2215a4eb4dc05c17696313ac15c01877ceeb4d34d0b451097e36ae,PodSandboxId:9b6c83eadeecd540eb3d823a28df9c9191d19e1349bb76d7ee640f5ace8fd487,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:11d76b979f02dc27a70e18a7d6de3451ce604f88dba049d4aa2b95225bb4c9ba,State:CONTAINER_RUNNING,CreatedAt:1713805229544862244,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 54f74d8d-0de6-4905-8880-cdc716c944b3,},Annotations:map[string]string{io.kubernetes.container.hash: 7742d2c,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac,PodSandboxId:e7dc89ac1feef0671d5d95fff08c7b88cf4313ffb379e61507b3fc92b599af4c,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713805200007867342,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-hb6nw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 616d4a54-5dfb-45cc-9b0a-a2461bbdc3e8,},Annotations:map[string]string{io.kubernetes.container.hash: ff242c65,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b992ddaec1770dd27b6fcf280f55f128597b66fded6028ed19bd06581c6a7af4,PodSandboxId:f4386e68702ef1455c642103ff9814ba90f77b0f7ef5e4021e8659af4c45530f,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:171380
5176582586683,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-dqx5m,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 3cca16d5-c0b9-4588-87c2-aa2cdbcbe7d9,},Annotations:map[string]string{io.kubernetes.container.hash: ecfe4f92,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a,PodSandboxId:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1713805167902556409,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-9rwbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be72f5e4-ae81-48d6-b57f-d9640e75904a,},Annotations:map[string]string{io.kubernetes.container.hash: dde1c527,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0,PodSandboxId:9a83f28ec25a17677c9a3dbe8eaf24730d2f8678d9e0d72053a42122b232eb0e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f561734
2c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805116439144899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eddb4fb4-7de5-44ef-9bac-3930ce87160c,},Annotations:map[string]string{io.kubernetes.container.hash: a017758,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839,PodSandboxId:c16f432ead2fcea0e9e3dc709ce1f66044a91451e425e92bb0e3b50e8b8fd5d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0079
7ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805114646949234,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9kl4l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46deec4f-c97e-48aa-b1ca-9c679e0a64e2,},Annotations:map[string]string{io.kubernetes.container.hash: 85712065,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160,PodSandb
oxId:6a5e4353dde8f7e0fadf7a0693d9c624a94121396d4daa820ffb5ed996ef7e32,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805111283702745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbd87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b08b8c4d-9f59-4f64-8503-e5d055487f74,},Annotations:map[string]string{io.kubernetes.container.hash: 4d716e8c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d,PodSandboxId:aa58a6a1d4ea9b4ae0775732d914
9c31b5d9f97c57149b082c6a5ba21fd7d06a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805090322229442,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d903f02e20fa6303480e5550d5ff53c6,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09,PodSandboxId:ed579bd1a44698e8db
b907cd5e2b51bbcc0b49cf0b9581ea5a0502d71b9a3462,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713805090294253425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a435bd17231af3fb00e256e0ac8b418,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e,PodSandboxId:9c1684f3b30a58043ea007f2cf40ec39dc7
abe196b588d110a0be598ded10ee9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713805090237410245,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da70b88177cb739840d9af5145b71cd6,},Annotations:map[string]string{io.kubernetes.container.hash: 48ba8371,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0,PodSandboxId:37d43ed8797170034a3fe41c1f1b7b1de3a9600a846a92f67d2a
7cfd4d831e11,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713805090204249501,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-934361,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c66e348c58730d7efb8ebd6834f7506,},Annotations:map[string]string{io.kubernetes.container.hash: 68bbc046,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43b64c86-bb5e-4ce4-8af1-a2c188ce551a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.073531946Z" level=debug msg="Found exit code for 0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a: 0" file="oci/runtime_oci.go:1022"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.074345963Z" level=debug msg="Skipping status update for: &{State:{Version:1.0.2-dev ID:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:dde1c527 io.kubernetes.container.name:metrics-server io.kubernetes.container.ports:[{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}] io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{\"io.kubernetes.container.hash\":\"dde1c527\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"https\\\",\\\"containerPort\\\":4443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.c
ontainer.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"} io.kubernetes.cri-o.ContainerID:0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-04-22T16:59:27.902653083Z io.kubernetes.cri-o.IP.0:10.244.0.9 io.kubernetes.cri-o.Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872 io.kubernetes.cri-o.ImageName:registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a io.kubernetes.cri-o.ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62 io.kubernetes.cri-o.Labels:{\"io.kubernetes.container.name\":\"metrics-server\",\"io.kubernetes.pod.name\":\"metrics-server-c59844bb4-9rwbq\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"be72f5e4-a
e81-48d6-b57f-d9640e75904a\"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_metrics-server-c59844bb4-9rwbq_be72f5e4-ae81-48d6-b57f-d9640e75904a/metrics-server/0.log io.kubernetes.cri-o.Metadata:{\"name\":\"metrics-server\"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/db2296d7d449d50389556c475bbd0c1ac262aea1811023f266da0b17ff33e2c1/merged io.kubernetes.cri-o.Name:k8s_metrics-server_metrics-server-c59844bb4-9rwbq_kube-system_be72f5e4-ae81-48d6-b57f-d9640e75904a_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/var/run/containers/storage/overlay-containers/953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3 io.kubernetes.cri-o.SandboxName:k8s_metrics-server-c59844bb4-9rwbq_kube-system_be72f5e4-ae81-48d6-b57f-d9640e75904a_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOn
ce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{\"container_path\":\"/tmp\",\"host_path\":\"/var/lib/kubelet/pods/be72f5e4-ae81-48d6-b57f-d9640e75904a/volumes/kubernetes.io~empty-dir/tmp-dir\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/be72f5e4-ae81-48d6-b57f-d9640e75904a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/be72f5e4-ae81-48d6-b57f-d9640e75904a/containers/metrics-server/85f1b159\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/be72f5e4-ae81-48d6-b57f-d9640e75904a/volumes/kubernetes.io~projected/kube-api-access-2bvpn\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}] io.kubernetes.pod.name:metrics-server-c59844bb4-9rwbq io.kubernetes.pod.na
mespace:kube-system io.kubernetes.pod.terminationGracePeriod:30 io.kubernetes.pod.uid:be72f5e4-ae81-48d6-b57f-d9640e75904a kubernetes.io/config.seen:2024-04-22T16:58:35.938291574Z kubernetes.io/config.source:api]} Created:2024-04-22 16:59:27.954529377 +0000 UTC Started:2024-04-22 16:59:27.983157809 +0000 UTC m=+88.488377104 Finished:2024-04-22 17:05:04.012036339 +0000 UTC ExitCode:0xc00117d3c0 OOMKilled:false SeccompKilled:false Error: InitPid:4960 InitStartTime:11051 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}" file="oci/runtime_oci.go:946" id=5c655c6f-0d7e-4327-b0b6-af2a8012d741 name=/runtime.v1.RuntimeService/StopContainer
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.077722261Z" level=debug msg="Event: REMOVE        \"/var/run/crio/exits/0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a\"" file="server/server.go:805"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.078313325Z" level=info msg="Stopped container 0b79999aaa7570cf41ba556f028fb5e066d874009aa187941b0daaf86234b15a: kube-system/metrics-server-c59844bb4-9rwbq/metrics-server" file="server/container_stop.go:29" id=5c655c6f-0d7e-4327-b0b6-af2a8012d741 name=/runtime.v1.RuntimeService/StopContainer
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.078422027Z" level=debug msg="Response: &StopContainerResponse{}" file="otel-collector/interceptors.go:74" id=5c655c6f-0d7e-4327-b0b6-af2a8012d741 name=/runtime.v1.RuntimeService/StopContainer
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.078974794Z" level=debug msg="Request: &StopPodSandboxRequest{PodSandboxId:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3,}" file="otel-collector/interceptors.go:62" id=477ffc53-9029-4c96-870b-c743c9603d7f name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.079098422Z" level=info msg="Stopping pod sandbox: 953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3" file="server/sandbox_stop.go:18" id=477ffc53-9029-4c96-870b-c743c9603d7f name=/runtime.v1.RuntimeService/StopPodSandbox
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.079697492Z" level=info msg="Got pod network &{Name:metrics-server-c59844bb4-9rwbq Namespace:kube-system ID:953e3f94691921deab44d764cdab924e1e97b8f4c0955f8259cd549d891553a3 UID:be72f5e4-ae81-48d6-b57f-d9640e75904a NetNS:/var/run/netns/e6b513e2-8b68-4ef1-bad2-a1be9ee78734 Networks:[{Name:bridge Ifname:eth0}] RuntimeConfig:map[bridge:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:/kubepods/burstable/podbe72f5e4-ae81-48d6-b57f-d9640e75904a PodAnnotations:0xc001fd6128}] Aliases:map[]}" file="ocicni/ocicni.go:795"
	Apr 22 17:05:04 addons-934361 crio[687]: time="2024-04-22 17:05:04.079961459Z" level=info msg="Deleting pod kube-system_metrics-server-c59844bb4-9rwbq from CNI network \"bridge\" (type=bridge)" file="ocicni/ocicni.go:667"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9df277752128a       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 2 minutes ago       Running             hello-world-app           0                   2c3dc4771468f       hello-world-app-86c47465fc-zdkzg
	9926944e9d157       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                   3 minutes ago       Running             headlamp                  0                   bb75130542489       headlamp-7559bf459f-jx57l
	f302e7c44f221       docker.io/library/nginx@sha256:542a383900f6fdcc90e1b7341f6889145d43f839f35f608b4de7821a77ca54d9                         4 minutes ago       Running             nginx                     0                   9b6c83eadeecd       nginx
	b1a1ac745bcf0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            5 minutes ago       Running             gcp-auth                  0                   e7dc89ac1feef       gcp-auth-5db96cd9b4-hb6nw
	b992ddaec1770       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         5 minutes ago       Running             yakd                      0                   f4386e68702ef       yakd-dashboard-5ddbf7d777-dqx5m
	0b79999aaa757       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   5 minutes ago       Exited              metrics-server            0                   953e3f9469192       metrics-server-c59844bb4-9rwbq
	b30c9c1cea934       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        6 minutes ago       Running             storage-provisioner       0                   9a83f28ec25a1       storage-provisioner
	fb7076488504b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        6 minutes ago       Running             coredns                   0                   c16f432ead2fc       coredns-7db6d8ff4d-9kl4l
	2bee9d70cd4bd       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                        6 minutes ago       Running             kube-proxy                0                   6a5e4353dde8f       kube-proxy-zbd87
	63ef320e6c20d       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                        6 minutes ago       Running             kube-controller-manager   0                   aa58a6a1d4ea9       kube-controller-manager-addons-934361
	6bfe236d105ed       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                        6 minutes ago       Running             kube-scheduler            0                   ed579bd1a4469       kube-scheduler-addons-934361
	f8c3dd43dc9c6       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                        6 minutes ago       Running             kube-apiserver            0                   9c1684f3b30a5       kube-apiserver-addons-934361
	0cef8fab46a0f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        6 minutes ago       Running             etcd                      0                   37d43ed879717       etcd-addons-934361
	
	
	==> coredns [fb7076488504b9b8a334d5583c8123a965a55b46c20061a958f52ddab6736839] <==
	[INFO] 10.244.0.7:49911 - 48847 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00070284s
	[INFO] 10.244.0.7:56817 - 9106 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000144253s
	[INFO] 10.244.0.7:56817 - 52126 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000071517s
	[INFO] 10.244.0.7:37733 - 62594 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000069402s
	[INFO] 10.244.0.7:37733 - 52100 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000133197s
	[INFO] 10.244.0.7:46105 - 5339 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000088529s
	[INFO] 10.244.0.7:46105 - 5337 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000134415s
	[INFO] 10.244.0.7:37067 - 34336 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000184296s
	[INFO] 10.244.0.7:37067 - 52783 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000273801s
	[INFO] 10.244.0.7:60120 - 63077 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000068555s
	[INFO] 10.244.0.7:60120 - 54371 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000032454s
	[INFO] 10.244.0.7:58709 - 62195 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032319s
	[INFO] 10.244.0.7:58709 - 47601 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059034s
	[INFO] 10.244.0.7:36499 - 35445 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000063831s
	[INFO] 10.244.0.7:36499 - 62347 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000026885s
	[INFO] 10.244.0.22:47401 - 13695 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000398656s
	[INFO] 10.244.0.22:60736 - 34009 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000102179s
	[INFO] 10.244.0.22:39459 - 54337 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000098307s
	[INFO] 10.244.0.22:33691 - 1173 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000241831s
	[INFO] 10.244.0.22:48377 - 24442 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112591s
	[INFO] 10.244.0.22:52282 - 23413 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00005398s
	[INFO] 10.244.0.22:53445 - 36897 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000639433s
	[INFO] 10.244.0.22:50030 - 18100 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001006445s
	[INFO] 10.244.0.23:36304 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002736466s
	[INFO] 10.244.0.23:37709 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000237898s
	
	
	==> describe nodes <==
	Name:               addons-934361
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-934361
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=addons-934361
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T16_58_16_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-934361
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 16:58:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-934361
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:05:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:03:21 +0000   Mon, 22 Apr 2024 16:58:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:03:21 +0000   Mon, 22 Apr 2024 16:58:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:03:21 +0000   Mon, 22 Apr 2024 16:58:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:03:21 +0000   Mon, 22 Apr 2024 16:58:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.135
	  Hostname:    addons-934361
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2706d446b56941c5901b11db32cb61d2
	  System UUID:                2706d446-b569-41c5-901b-11db32cb61d2
	  Boot ID:                    6bc823c6-7e50-4cce-bb78-65a464b0a746
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-zdkzg         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m19s
	  default                     nginx                                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         4m39s
	  gcp-auth                    gcp-auth-5db96cd9b4-hb6nw                0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m21s
	  headlamp                    headlamp-7559bf459f-jx57l                0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m12s
	  kube-system                 coredns-7db6d8ff4d-9kl4l                 100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     6m35s
	  kube-system                 etcd-addons-934361                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (2%!)(MISSING)       0 (0%!)(MISSING)         6m49s
	  kube-system                 kube-apiserver-addons-934361             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m49s
	  kube-system                 kube-controller-manager-addons-934361    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m49s
	  kube-system                 kube-proxy-zbd87                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m36s
	  kube-system                 kube-scheduler-addons-934361             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m49s
	  kube-system                 storage-provisioner                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m30s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-dqx5m          0 (0%!)(MISSING)        0 (0%!)(MISSING)      128Mi (3%!)(MISSING)       256Mi (6%!)(MISSING)     6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             298Mi (7%!)(MISSING)  426Mi (11%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m32s  kube-proxy       
	  Normal  Starting                 6m49s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m49s  kubelet          Node addons-934361 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m49s  kubelet          Node addons-934361 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m49s  kubelet          Node addons-934361 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m48s  kubelet          Node addons-934361 status is now: NodeReady
	  Normal  RegisteredNode           6m36s  node-controller  Node addons-934361 event: Registered Node addons-934361 in Controller
	
	
	==> dmesg <==
	[  +5.338820] kauditd_printk_skb: 106 callbacks suppressed
	[ +14.011100] kauditd_printk_skb: 5 callbacks suppressed
	[Apr22 16:59] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.123053] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.483322] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.061613] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.080444] kauditd_printk_skb: 58 callbacks suppressed
	[  +6.606015] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.518867] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 17:00] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.332821] kauditd_printk_skb: 29 callbacks suppressed
	[ +11.738701] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.258583] kauditd_printk_skb: 32 callbacks suppressed
	[ +18.635768] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.501461] kauditd_printk_skb: 15 callbacks suppressed
	[Apr22 17:01] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.889614] kauditd_printk_skb: 2 callbacks suppressed
	[ +15.915807] kauditd_printk_skb: 6 callbacks suppressed
	[  +8.870628] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.501543] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.600825] kauditd_printk_skb: 24 callbacks suppressed
	[Apr22 17:02] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.117678] kauditd_printk_skb: 7 callbacks suppressed
	[ +40.135246] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.172720] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [0cef8fab46a0ffeaaa6f1177450229a0d065714a39adcc9aa6fa683bab7db1e0] <==
	{"level":"info","ts":"2024-04-22T16:59:24.630749Z","caller":"traceutil/trace.go:171","msg":"trace[1361596995] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:939; }","duration":"119.03451ms","start":"2024-04-22T16:59:24.511703Z","end":"2024-04-22T16:59:24.630737Z","steps":["trace[1361596995] 'agreement among raft nodes before linearized reading'  (duration: 118.801891ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T16:59:24.630924Z","caller":"traceutil/trace.go:171","msg":"trace[1125529670] transaction","detail":"{read_only:false; response_revision:939; number_of_response:1; }","duration":"175.076823ms","start":"2024-04-22T16:59:24.45584Z","end":"2024-04-22T16:59:24.630917Z","steps":["trace[1125529670] 'process raft request'  (duration: 174.379144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T16:59:24.631539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.705606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85134"}
	{"level":"info","ts":"2024-04-22T16:59:24.631593Z","caller":"traceutil/trace.go:171","msg":"trace[1891859423] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:939; }","duration":"115.782682ms","start":"2024-04-22T16:59:24.515802Z","end":"2024-04-22T16:59:24.631585Z","steps":["trace[1891859423] 'agreement among raft nodes before linearized reading'  (duration: 115.583567ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T16:59:46.664576Z","caller":"traceutil/trace.go:171","msg":"trace[1071014651] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"108.091796ms","start":"2024-04-22T16:59:46.556378Z","end":"2024-04-22T16:59:46.66447Z","steps":["trace[1071014651] 'process raft request'  (duration: 106.351844ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T16:59:56.79784Z","caller":"traceutil/trace.go:171","msg":"trace[36495517] transaction","detail":"{read_only:false; response_revision:1104; number_of_response:1; }","duration":"245.259788ms","start":"2024-04-22T16:59:56.552564Z","end":"2024-04-22T16:59:56.797824Z","steps":["trace[36495517] 'process raft request'  (duration: 244.931014ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:00:06.288347Z","caller":"traceutil/trace.go:171","msg":"trace[25965756] transaction","detail":"{read_only:false; response_revision:1161; number_of_response:1; }","duration":"285.750524ms","start":"2024-04-22T17:00:06.00258Z","end":"2024-04-22T17:00:06.28833Z","steps":["trace[25965756] 'process raft request'  (duration: 285.599339ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:17.677774Z","caller":"traceutil/trace.go:171","msg":"trace[337112495] linearizableReadLoop","detail":"{readStateIndex:1542; appliedIndex:1541; }","duration":"169.98594ms","start":"2024-04-22T17:01:17.507747Z","end":"2024-04-22T17:01:17.677733Z","steps":["trace[337112495] 'read index received'  (duration: 169.862954ms)","trace[337112495] 'applied index is now lower than readState.Index'  (duration: 122.556µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:01:17.678158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.340225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-04-22T17:01:17.678232Z","caller":"traceutil/trace.go:171","msg":"trace[1168414504] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1479; }","duration":"170.500449ms","start":"2024-04-22T17:01:17.507722Z","end":"2024-04-22T17:01:17.678222Z","steps":["trace[1168414504] 'agreement among raft nodes before linearized reading'  (duration: 170.225909ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:17.678462Z","caller":"traceutil/trace.go:171","msg":"trace[1112686816] transaction","detail":"{read_only:false; response_revision:1479; number_of_response:1; }","duration":"198.382208ms","start":"2024-04-22T17:01:17.480072Z","end":"2024-04-22T17:01:17.678455Z","steps":["trace[1112686816] 'process raft request'  (duration: 197.581777ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:23.851235Z","caller":"traceutil/trace.go:171","msg":"trace[1269274059] transaction","detail":"{read_only:false; response_revision:1492; number_of_response:1; }","duration":"143.049819ms","start":"2024-04-22T17:01:23.708167Z","end":"2024-04-22T17:01:23.851217Z","steps":["trace[1269274059] 'process raft request'  (duration: 142.948726ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:48.979427Z","caller":"traceutil/trace.go:171","msg":"trace[694671986] linearizableReadLoop","detail":"{readStateIndex:1743; appliedIndex:1742; }","duration":"164.955934ms","start":"2024-04-22T17:01:48.814458Z","end":"2024-04-22T17:01:48.979414Z","steps":["trace[694671986] 'read index received'  (duration: 164.757811ms)","trace[694671986] 'applied index is now lower than readState.Index'  (duration: 197.701µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:01:48.979543Z","caller":"traceutil/trace.go:171","msg":"trace[1874466795] transaction","detail":"{read_only:false; response_revision:1670; number_of_response:1; }","duration":"171.99275ms","start":"2024-04-22T17:01:48.807539Z","end":"2024-04-22T17:01:48.979532Z","steps":["trace[1874466795] 'process raft request'  (duration: 171.749955ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:01:48.979923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.451886ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6125"}
	{"level":"info","ts":"2024-04-22T17:01:48.980076Z","caller":"traceutil/trace.go:171","msg":"trace[1194715953] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1670; }","duration":"165.632526ms","start":"2024-04-22T17:01:48.814435Z","end":"2024-04-22T17:01:48.980067Z","steps":["trace[1194715953] 'agreement among raft nodes before linearized reading'  (duration: 165.406916ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:01:58.460354Z","caller":"traceutil/trace.go:171","msg":"trace[913897160] linearizableReadLoop","detail":"{readStateIndex:1833; appliedIndex:1832; }","duration":"166.390696ms","start":"2024-04-22T17:01:58.29395Z","end":"2024-04-22T17:01:58.460341Z","steps":["trace[913897160] 'read index received'  (duration: 166.242883ms)","trace[913897160] 'applied index is now lower than readState.Index'  (duration: 147.392µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:01:58.460456Z","caller":"traceutil/trace.go:171","msg":"trace[544735588] transaction","detail":"{read_only:false; response_revision:1757; number_of_response:1; }","duration":"430.169114ms","start":"2024-04-22T17:01:58.03028Z","end":"2024-04-22T17:01:58.460449Z","steps":["trace[544735588] 'process raft request'  (duration: 429.952907ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:01:58.460571Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T17:01:58.030267Z","time spent":"430.21481ms","remote":"127.0.0.1:35118","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1755 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-22T17:01:58.460851Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.894158ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T17:01:58.460876Z","caller":"traceutil/trace.go:171","msg":"trace[480731048] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1757; }","duration":"166.943114ms","start":"2024-04-22T17:01:58.293926Z","end":"2024-04-22T17:01:58.460869Z","steps":["trace[480731048] 'agreement among raft nodes before linearized reading'  (duration: 166.898748ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:02:34.792857Z","caller":"traceutil/trace.go:171","msg":"trace[967922381] linearizableReadLoop","detail":"{readStateIndex:1913; appliedIndex:1912; }","duration":"139.296945ms","start":"2024-04-22T17:02:34.653518Z","end":"2024-04-22T17:02:34.792815Z","steps":["trace[967922381] 'read index received'  (duration: 139.066603ms)","trace[967922381] 'applied index is now lower than readState.Index'  (duration: 229.413µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:02:34.793165Z","caller":"traceutil/trace.go:171","msg":"trace[169896204] transaction","detail":"{read_only:false; response_revision:1829; number_of_response:1; }","duration":"143.003767ms","start":"2024-04-22T17:02:34.650139Z","end":"2024-04-22T17:02:34.793143Z","steps":["trace[169896204] 'process raft request'  (duration: 142.489075ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:02:34.793377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.78288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.135\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-04-22T17:02:34.793448Z","caller":"traceutil/trace.go:171","msg":"trace[35807185] range","detail":"{range_begin:/registry/masterleases/192.168.39.135; range_end:; response_count:1; response_revision:1829; }","duration":"139.9362ms","start":"2024-04-22T17:02:34.653496Z","end":"2024-04-22T17:02:34.793432Z","steps":["trace[35807185] 'agreement among raft nodes before linearized reading'  (duration: 139.631837ms)"],"step_count":1}
	
	
	==> gcp-auth [b1a1ac745bcf0fc86b0d7c234acd0754aa181fead897f10bb9acfdbf086da6ac] <==
	2024/04/22 17:00:00 GCP Auth Webhook started!
	2024/04/22 17:00:19 Ready to marshal response ...
	2024/04/22 17:00:19 Ready to write response ...
	2024/04/22 17:00:25 Ready to marshal response ...
	2024/04/22 17:00:25 Ready to write response ...
	2024/04/22 17:00:42 Ready to marshal response ...
	2024/04/22 17:00:42 Ready to write response ...
	2024/04/22 17:00:42 Ready to marshal response ...
	2024/04/22 17:00:42 Ready to write response ...
	2024/04/22 17:00:53 Ready to marshal response ...
	2024/04/22 17:00:53 Ready to write response ...
	2024/04/22 17:01:11 Ready to marshal response ...
	2024/04/22 17:01:11 Ready to write response ...
	2024/04/22 17:01:34 Ready to marshal response ...
	2024/04/22 17:01:34 Ready to write response ...
	2024/04/22 17:01:52 Ready to marshal response ...
	2024/04/22 17:01:52 Ready to write response ...
	2024/04/22 17:01:52 Ready to marshal response ...
	2024/04/22 17:01:52 Ready to write response ...
	2024/04/22 17:01:52 Ready to marshal response ...
	2024/04/22 17:01:52 Ready to write response ...
	2024/04/22 17:02:00 Ready to marshal response ...
	2024/04/22 17:02:00 Ready to write response ...
	2024/04/22 17:02:45 Ready to marshal response ...
	2024/04/22 17:02:45 Ready to write response ...
	
	
	==> kernel <==
	 17:05:04 up 7 min,  0 users,  load average: 0.21, 0.98, 0.64
	Linux addons-934361 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f8c3dd43dc9c6e080ef84786932c5ddc4cf45f63eec9e1a952d49ce9201e443e] <==
	W0422 17:00:36.583739       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 17:00:36.583941       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0422 17:00:36.584707       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.140.247:443: connect: connection refused
	E0422 17:00:36.589647       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.140.247:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.140.247:443: connect: connection refused
	I0422 17:00:36.651949       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0422 17:01:09.807581       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0422 17:01:26.339156       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0422 17:01:51.434856       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.434900       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.467718       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.467778       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.478486       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.479048       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.500328       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.500388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:01:51.513264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:01:51.513315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0422 17:01:52.480525       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0422 17:01:52.514175       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0422 17:01:52.541447       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0422 17:01:52.690628       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.15.75"}
	E0422 17:02:04.374686       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.135:8443->10.244.0.31:37752: read: connection reset by peer
	I0422 17:02:46.095620       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.11.199"}
	E0422 17:02:48.379811       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [63ef320e6c20d1144e019e55e3c7a845e397740786a316bc597d183096a22e6d] <==
	E0422 17:02:57.152924       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 17:02:58.090097       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W0422 17:03:04.366981       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:03:04.367082       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:03:07.135297       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:03:07.135407       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:03:23.411977       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:03:23.412088       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:03:31.698675       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:03:31.698791       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:03:46.794730       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:03:46.794915       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:03:52.812926       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:03:52.813148       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:04:08.811227       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:04:08.811304       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:04:18.054388       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:04:18.054500       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:04:18.955643       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:04:18.955706       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:04:42.114387       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:04:42.114478       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:04:43.377319       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:04:43.377476       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 17:05:02.882794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="18.788µs"
	
	
	==> kube-proxy [2bee9d70cd4bdfff2572317a86965419481dbf845c4ddc5ef74c135e769a2160] <==
	I0422 16:58:32.286126       1 server_linux.go:69] "Using iptables proxy"
	I0422 16:58:32.303116       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.135"]
	I0422 16:58:32.375432       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 16:58:32.375468       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 16:58:32.375484       1 server_linux.go:165] "Using iptables Proxier"
	I0422 16:58:32.381266       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 16:58:32.381413       1 server.go:872] "Version info" version="v1.30.0"
	I0422 16:58:32.381424       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 16:58:32.384613       1 config.go:192] "Starting service config controller"
	I0422 16:58:32.384625       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 16:58:32.384656       1 config.go:101] "Starting endpoint slice config controller"
	I0422 16:58:32.384659       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 16:58:32.384986       1 config.go:319] "Starting node config controller"
	I0422 16:58:32.384993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 16:58:32.484756       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 16:58:32.484844       1 shared_informer.go:320] Caches are synced for service config
	I0422 16:58:32.488992       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6bfe236d105edaf83bac0758ad1c7b9853e21c1705e96f999f3441c2bd607e09] <==
	W0422 16:58:13.901510       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 16:58:13.901622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 16:58:13.907903       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 16:58:13.908118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 16:58:13.913776       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 16:58:13.913827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 16:58:13.968945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:13.969122       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:14.012304       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 16:58:14.012571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 16:58:14.047166       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 16:58:14.047321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 16:58:14.086948       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:14.087059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:14.130758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 16:58:14.131208       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 16:58:14.131745       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:14.131875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:14.166989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 16:58:14.168104       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 16:58:14.208731       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 16:58:14.208783       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 16:58:14.340276       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 16:58:14.340389       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 16:58:17.376185       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.601312    1286 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a856af-bd62-4050-b05b-81914b90e27e-kube-api-access-j6x6v" (OuterVolumeSpecName: "kube-api-access-j6x6v") pod "f4a856af-bd62-4050-b05b-81914b90e27e" (UID: "f4a856af-bd62-4050-b05b-81914b90e27e"). InnerVolumeSpecName "kube-api-access-j6x6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.602480    1286 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a856af-bd62-4050-b05b-81914b90e27e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f4a856af-bd62-4050-b05b-81914b90e27e" (UID: "f4a856af-bd62-4050-b05b-81914b90e27e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.699350    1286 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-j6x6v\" (UniqueName: \"kubernetes.io/projected/f4a856af-bd62-4050-b05b-81914b90e27e-kube-api-access-j6x6v\") on node \"addons-934361\" DevicePath \"\""
	Apr 22 17:02:51 addons-934361 kubelet[1286]: I0422 17:02:51.699384    1286 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f4a856af-bd62-4050-b05b-81914b90e27e-webhook-cert\") on node \"addons-934361\" DevicePath \"\""
	Apr 22 17:02:53 addons-934361 kubelet[1286]: I0422 17:02:53.577191    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a856af-bd62-4050-b05b-81914b90e27e" path="/var/lib/kubelet/pods/f4a856af-bd62-4050-b05b-81914b90e27e/volumes"
	Apr 22 17:03:15 addons-934361 kubelet[1286]: E0422 17:03:15.604869    1286 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:03:15 addons-934361 kubelet[1286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:03:15 addons-934361 kubelet[1286]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:03:15 addons-934361 kubelet[1286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:03:15 addons-934361 kubelet[1286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:03:18 addons-934361 kubelet[1286]: I0422 17:03:18.605364    1286 scope.go:117] "RemoveContainer" containerID="82d4057c59bb2224b137638b04e782f5f48d6ae9ae4fc2a1fc61b6e95b6bd1a3"
	Apr 22 17:03:18 addons-934361 kubelet[1286]: I0422 17:03:18.628247    1286 scope.go:117] "RemoveContainer" containerID="71f53d8807307fb58fd40a6301d224dba6301156f19ff29d351a9adba4825e3d"
	Apr 22 17:03:18 addons-934361 kubelet[1286]: I0422 17:03:18.644991    1286 scope.go:117] "RemoveContainer" containerID="a6962942a6d92d4b0e2d100987670d7b3ddc655dbd50d9ba62da86116de79867"
	Apr 22 17:04:15 addons-934361 kubelet[1286]: E0422 17:04:15.606170    1286 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:04:15 addons-934361 kubelet[1286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:04:15 addons-934361 kubelet[1286]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:04:15 addons-934361 kubelet[1286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:04:15 addons-934361 kubelet[1286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:05:02 addons-934361 kubelet[1286]: I0422 17:05:02.917122    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-zdkzg" podStartSLOduration=134.947127832 podStartE2EDuration="2m17.917070387s" podCreationTimestamp="2024-04-22 17:02:45 +0000 UTC" firstStartedPulling="2024-04-22 17:02:46.554703476 +0000 UTC m=+271.154759343" lastFinishedPulling="2024-04-22 17:02:49.52464603 +0000 UTC m=+274.124701898" observedRunningTime="2024-04-22 17:02:50.402096037 +0000 UTC m=+275.002151923" watchObservedRunningTime="2024-04-22 17:05:02.917070387 +0000 UTC m=+407.517126270"
	Apr 22 17:05:04 addons-934361 kubelet[1286]: I0422 17:05:04.322608    1286 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be72f5e4-ae81-48d6-b57f-d9640e75904a-tmp-dir\") pod \"be72f5e4-ae81-48d6-b57f-d9640e75904a\" (UID: \"be72f5e4-ae81-48d6-b57f-d9640e75904a\") "
	Apr 22 17:05:04 addons-934361 kubelet[1286]: I0422 17:05:04.322680    1286 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvpn\" (UniqueName: \"kubernetes.io/projected/be72f5e4-ae81-48d6-b57f-d9640e75904a-kube-api-access-2bvpn\") pod \"be72f5e4-ae81-48d6-b57f-d9640e75904a\" (UID: \"be72f5e4-ae81-48d6-b57f-d9640e75904a\") "
	Apr 22 17:05:04 addons-934361 kubelet[1286]: I0422 17:05:04.323768    1286 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be72f5e4-ae81-48d6-b57f-d9640e75904a-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "be72f5e4-ae81-48d6-b57f-d9640e75904a" (UID: "be72f5e4-ae81-48d6-b57f-d9640e75904a"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Apr 22 17:05:04 addons-934361 kubelet[1286]: I0422 17:05:04.334821    1286 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be72f5e4-ae81-48d6-b57f-d9640e75904a-kube-api-access-2bvpn" (OuterVolumeSpecName: "kube-api-access-2bvpn") pod "be72f5e4-ae81-48d6-b57f-d9640e75904a" (UID: "be72f5e4-ae81-48d6-b57f-d9640e75904a"). InnerVolumeSpecName "kube-api-access-2bvpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 17:05:04 addons-934361 kubelet[1286]: I0422 17:05:04.423772    1286 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/be72f5e4-ae81-48d6-b57f-d9640e75904a-tmp-dir\") on node \"addons-934361\" DevicePath \"\""
	Apr 22 17:05:04 addons-934361 kubelet[1286]: I0422 17:05:04.423868    1286 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2bvpn\" (UniqueName: \"kubernetes.io/projected/be72f5e4-ae81-48d6-b57f-d9640e75904a-kube-api-access-2bvpn\") on node \"addons-934361\" DevicePath \"\""
	
	
	==> storage-provisioner [b30c9c1cea934b809472df5b185500d4b66e8f38584d17ccdf603e2b650758d0] <==
	I0422 16:58:36.971074       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 16:58:37.176696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 16:58:37.176739       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 16:58:37.230118       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 16:58:37.230279       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-934361_8fb7efa7-a319-4388-ae68-1203787d6366!
	I0422 16:58:37.279902       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"53255fb8-5ed2-42af-aa8d-2f49cb24b17b", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-934361_8fb7efa7-a319-4388-ae68-1203787d6366 became leader
	I0422 16:58:37.431300       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-934361_8fb7efa7-a319-4388-ae68-1203787d6366!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-934361 -n addons-934361
helpers_test.go:261: (dbg) Run:  kubectl --context addons-934361 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (297.39s)
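The kube-apiserver log above shows the v1beta1.metrics.k8s.io APIService repeatedly failing (HTTP 503 / connection refused to 10.107.140.247:443), which is consistent with the metrics-server addon never becoming available. A minimal manual check against the same profile might look like the following; these commands are not part of the recorded test output and assume the addons-934361 context is still reachable and that the addon uses the conventional k8s-app=metrics-server label:

	# inspect the metrics-server pod and the aggregated APIService it backs
	kubectl --context addons-934361 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-934361 get apiservice v1beta1.metrics.k8s.io -o wide
	# only succeeds once the APIService reports Available=True
	kubectl --context addons-934361 top nodes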

                                                
                                    
TestAddons/StoppedEnableDisable (154.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-934361
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-934361: exit status 82 (2m0.480655071s)

                                                
                                                
-- stdout --
	* Stopping node "addons-934361"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-934361" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-934361
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-934361: exit status 11 (21.606968245s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.135:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-934361" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-934361
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-934361: exit status 11 (6.144553908s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.135:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-934361" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-934361
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-934361: exit status 11 (6.144008459s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.135:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-934361" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.38s)
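The stop above timed out with GUEST_STOP_TIMEOUT, and every follow-up addon command then failed with "no route to host", which suggests the VM was left neither cleanly stopped nor reachable over SSH. A rough way to reproduce and inspect that state by hand, assuming access to the same KVM host and profile (these commands are not part of the recorded test run, and virsh requires the libvirt client tools):

	# retry the stop with verbose logging to see where it stalls
	out/minikube-linux-amd64 stop -p addons-934361 --alsologtostderr -v=7
	# ask libvirt directly whether the domain is still running
	sudo virsh list --all
	# compare with what minikube believes the profile state is
	out/minikube-linux-amd64 status -p addons-934361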

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-005894 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-005894 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-005894 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-005894 --alsologtostderr -v=1] stderr:
I0422 17:11:49.130444   28378 out.go:291] Setting OutFile to fd 1 ...
I0422 17:11:49.130622   28378 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:49.130632   28378 out.go:304] Setting ErrFile to fd 2...
I0422 17:11:49.130637   28378 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:49.130829   28378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
I0422 17:11:49.131057   28378 mustload.go:65] Loading cluster: functional-005894
I0422 17:11:49.131457   28378 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:49.131805   28378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:49.131843   28378 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:49.148277   28378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
I0422 17:11:49.148757   28378 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:49.149382   28378 main.go:141] libmachine: Using API Version  1
I0422 17:11:49.149406   28378 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:49.149826   28378 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:49.150037   28378 main.go:141] libmachine: (functional-005894) Calling .GetState
I0422 17:11:49.151853   28378 host.go:66] Checking if "functional-005894" exists ...
I0422 17:11:49.152210   28378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:49.152246   28378 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:49.170449   28378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
I0422 17:11:49.171013   28378 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:49.171694   28378 main.go:141] libmachine: Using API Version  1
I0422 17:11:49.171714   28378 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:49.172108   28378 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:49.172276   28378 main.go:141] libmachine: (functional-005894) Calling .DriverName
I0422 17:11:49.172406   28378 api_server.go:166] Checking apiserver status ...
I0422 17:11:49.172458   28378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0422 17:11:49.172492   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHHostname
I0422 17:11:49.175958   28378 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:49.176650   28378 main.go:141] libmachine: (functional-005894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:d5:8a", ip: ""} in network mk-functional-005894: {Iface:virbr1 ExpiryTime:2024-04-22 18:09:08 +0000 UTC Type:0 Mac:52:54:00:89:d5:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-005894 Clientid:01:52:54:00:89:d5:8a}
I0422 17:11:49.176776   28378 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined IP address 192.168.39.154 and MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:49.176985   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHPort
I0422 17:11:49.177184   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHKeyPath
I0422 17:11:49.177364   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHUsername
I0422 17:11:49.177549   28378 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/functional-005894/id_rsa Username:docker}
I0422 17:11:49.278340   28378 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5158/cgroup
W0422 17:11:49.289559   28378 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5158/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0422 17:11:49.289624   28378 ssh_runner.go:195] Run: ls
I0422 17:11:49.294664   28378 api_server.go:253] Checking apiserver healthz at https://192.168.39.154:8441/healthz ...
I0422 17:11:49.299005   28378 api_server.go:279] https://192.168.39.154:8441/healthz returned 200:
ok
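
Before enabling the dashboard, the command verifies the apiserver: the freezer-cgroup lookup above fails (expected when the node runs cgroup v2, which has no freezer controller), so the check falls back to probing https://192.168.39.154:8441/healthz, which returns 200 "ok". A minimal sketch of that probe, assuming certificate handling is out of scope here (the real client authenticates with the cluster CA and client certificates from the profile):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Sketch only: skip TLS verification instead of loading the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.154:8441/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}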
W0422 17:11:49.299051   28378 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0422 17:11:49.299251   28378 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:49.299267   28378 addons.go:69] Setting dashboard=true in profile "functional-005894"
I0422 17:11:49.299273   28378 addons.go:234] Setting addon dashboard=true in "functional-005894"
I0422 17:11:49.299297   28378 host.go:66] Checking if "functional-005894" exists ...
I0422 17:11:49.299565   28378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:49.299601   28378 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:49.314229   28378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
I0422 17:11:49.314716   28378 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:49.315231   28378 main.go:141] libmachine: Using API Version  1
I0422 17:11:49.315252   28378 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:49.315562   28378 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:49.316021   28378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:49.316053   28378 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:49.331605   28378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43921
I0422 17:11:49.332019   28378 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:49.332479   28378 main.go:141] libmachine: Using API Version  1
I0422 17:11:49.332505   28378 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:49.332792   28378 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:49.333019   28378 main.go:141] libmachine: (functional-005894) Calling .GetState
I0422 17:11:49.334801   28378 main.go:141] libmachine: (functional-005894) Calling .DriverName
I0422 17:11:49.359994   28378 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0422 17:11:49.361715   28378 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0422 17:11:49.363326   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0422 17:11:49.363343   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0422 17:11:49.363363   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHHostname
I0422 17:11:49.366625   28378 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:49.367079   28378 main.go:141] libmachine: (functional-005894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:d5:8a", ip: ""} in network mk-functional-005894: {Iface:virbr1 ExpiryTime:2024-04-22 18:09:08 +0000 UTC Type:0 Mac:52:54:00:89:d5:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-005894 Clientid:01:52:54:00:89:d5:8a}
I0422 17:11:49.367109   28378 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined IP address 192.168.39.154 and MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:49.367279   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHPort
I0422 17:11:49.367501   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHKeyPath
I0422 17:11:49.367661   28378 main.go:141] libmachine: (functional-005894) Calling .GetSSHUsername
I0422 17:11:49.367822   28378 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/functional-005894/id_rsa Username:docker}
I0422 17:11:49.465365   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0422 17:11:49.465385   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0422 17:11:49.488043   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0422 17:11:49.488069   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0422 17:11:49.508688   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0422 17:11:49.508714   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0422 17:11:49.527425   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0422 17:11:49.527445   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0422 17:11:49.550809   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0422 17:11:49.550830   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0422 17:11:49.571598   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0422 17:11:49.571627   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0422 17:11:49.594464   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0422 17:11:49.594488   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0422 17:11:49.622607   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0422 17:11:49.622635   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0422 17:11:49.649919   28378 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0422 17:11:49.649939   28378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0422 17:11:49.676742   28378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0422 17:11:50.954329   28378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.277542685s)
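
The enable step itself is the two operations logged above: each dashboard manifest is copied to /etc/kubernetes/addons/ on the node, then a single kubectl apply is run over the whole set with the cluster kubeconfig. A sketch that rebuilds the same command line (it only prints the command; on the node it is executed through the SSH runner):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Manifest list copied from the ssh_runner call above.
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml", "dashboard-rolebinding.yaml",
		"dashboard-sa.yaml", "dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	args := []string{"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.0/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	// Print the command instead of running it, to keep the sketch self-contained.
	fmt.Println(strings.Join(args, " "))
}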
I0422 17:11:50.954418   28378 main.go:141] libmachine: Making call to close driver server
I0422 17:11:50.954439   28378 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:50.954755   28378 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:50.954773   28378 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:11:50.954783   28378 main.go:141] libmachine: Making call to close driver server
I0422 17:11:50.954795   28378 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:50.954994   28378 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:50.955015   28378 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:11:50.955043   28378 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
I0422 17:11:50.956694   28378 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-005894 addons enable metrics-server

                                                
                                                
I0422 17:11:50.958066   28378 addons.go:197] Writing out "functional-005894" config to set dashboard=true...
W0422 17:11:50.958305   28378 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0422 17:11:50.958956   28378 kapi.go:59] client config for functional-005894: &rest.Config{Host:"https://192.168.39.154:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0422 17:11:50.972082   28378 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  f75c6b6d-38df-4dd8-b86b-aea76561c535 809 0 2024-04-22 17:11:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-04-22 17:11:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.103.212.241,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.103.212.241],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0422 17:11:50.972199   28378 out.go:239] * Launching proxy ...
* Launching proxy ...
I0422 17:11:50.972270   28378 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-005894 proxy --port 36195]
I0422 17:11:50.972534   28378 dashboard.go:157] Waiting for kubectl to output host:port ...
I0422 17:11:51.013400   28378 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0422 17:11:51.013443   28378 out.go:239] * Verifying proxy health ...
* Verifying proxy health ...
I0422 17:11:51.052131   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f937ae01-7b16-4e54-8625-c9748309c332] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002125400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b1b00 TLS:<nil>}
I0422 17:11:51.052215   28378 retry.go:31] will retry after 101.819µs: Temporary Error: unexpected response code: 503
I0422 17:11:51.056852   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ebaff1cb-29f0-4eeb-9273-fec85a62af79] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc0023184c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021347e0 TLS:<nil>}
I0422 17:11:51.056915   28378 retry.go:31] will retry after 141.671µs: Temporary Error: unexpected response code: 503
I0422 17:11:51.113653   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[660e5a3c-2632-4a86-ade9-e33982111071] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc00208b680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021ffd40 TLS:<nil>}
I0422 17:11:51.113748   28378 retry.go:31] will retry after 280.235µs: Temporary Error: unexpected response code: 503
I0422 17:11:51.122122   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4724f132-5dd3-46f1-8641-40f5909a0fc1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0020b1e60 TLS:<nil>}
I0422 17:11:51.122177   28378 retry.go:31] will retry after 264.978µs: Temporary Error: unexpected response code: 503
I0422 17:11:51.128051   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[66f35112-a713-4a3a-9730-a7e4e4cb8fd3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc00208b780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002324000 TLS:<nil>}
I0422 17:11:51.128125   28378 retry.go:31] will retry after 675.261µs: Temporary Error: unexpected response code: 503
I0422 17:11:51.143350   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2110992-5bcf-45a2-88be-a3568c941829] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023c2120 TLS:<nil>}
I0422 17:11:51.143409   28378 retry.go:31] will retry after 509.574µs: Temporary Error: unexpected response code: 503
I0422 17:11:51.150269   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[11d3fde1-44eb-49ca-948f-815f7df6c015] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc00208b8c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002324240 TLS:<nil>}
I0422 17:11:51.150339   28378 retry.go:31] will retry after 1.664559ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.157770   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[03a60ba8-a386-4676-9abe-4227d9d24929] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023c2360 TLS:<nil>}
I0422 17:11:51.157831   28378 retry.go:31] will retry after 1.037055ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.167510   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[830ad634-6cd5-4e8e-a6b2-b42e52b997b6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002125540 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002324480 TLS:<nil>}
I0422 17:11:51.167601   28378 retry.go:31] will retry after 2.732953ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.176818   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69ccc123-6fa1-4e7c-900e-8d2fbc5e800a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002125680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002134a20 TLS:<nil>}
I0422 17:11:51.176903   28378 retry.go:31] will retry after 5.183856ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.188172   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b5d8b48d-2727-48bf-a989-d083e317b70e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc0023189c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002134c60 TLS:<nil>}
I0422 17:11:51.188253   28378 retry.go:31] will retry after 3.282578ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.198333   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1ed98222-c221-4953-88cc-d427a7f5f2fc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023247e0 TLS:<nil>}
I0422 17:11:51.198403   28378 retry.go:31] will retry after 11.492064ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.218688   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d4f4697d-44e1-4c0b-9807-fab243501395] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002324a20 TLS:<nil>}
I0422 17:11:51.218773   28378 retry.go:31] will retry after 15.608594ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.241161   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bc320ce-b39d-4807-9724-56e9ca0c495d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002125780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002324c60 TLS:<nil>}
I0422 17:11:51.241232   28378 retry.go:31] will retry after 19.622514ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.264525   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36d60558-2ee9-44e0-9300-b27f1c64e3fd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002134ea0 TLS:<nil>}
I0422 17:11:51.264615   28378 retry.go:31] will retry after 43.025344ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.311181   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e02f7748-0a01-4512-8616-00fdf6d1dc2a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002324ea0 TLS:<nil>}
I0422 17:11:51.311276   28378 retry.go:31] will retry after 56.701079ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.372055   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ab70bac7-613d-4247-ab03-d1140ff0b1ea] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002318fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023250e0 TLS:<nil>}
I0422 17:11:51.372129   28378 retry.go:31] will retry after 88.165815ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.463401   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d18c558-d2fe-4caa-bef9-fbe86ba42d04] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc0023190c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002325320 TLS:<nil>}
I0422 17:11:51.463469   28378 retry.go:31] will retry after 98.244286ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.565724   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2700ace3-d604-4b58-951e-ccbb2648fc7a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002125980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002325560 TLS:<nil>}
I0422 17:11:51.565784   28378 retry.go:31] will retry after 178.991968ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.748169   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4016745e-fd0c-40da-acf0-45ff7516ec58] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc0023191c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021350e0 TLS:<nil>}
I0422 17:11:51.748228   28378 retry.go:31] will retry after 225.335804ms: Temporary Error: unexpected response code: 503
I0422 17:11:51.977774   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9d16d5f0-cf22-40ac-87bd-6e4c014330bc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:51 GMT]] Body:0xc002125ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023257a0 TLS:<nil>}
I0422 17:11:51.977838   28378 retry.go:31] will retry after 330.110239ms: Temporary Error: unexpected response code: 503
I0422 17:11:52.311347   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a947e422-d7c9-417e-b405-d6646633fdad] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:52 GMT]] Body:0xc002319300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002135320 TLS:<nil>}
I0422 17:11:52.311422   28378 retry.go:31] will retry after 356.839638ms: Temporary Error: unexpected response code: 503
I0422 17:11:52.672331   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[049222ef-7960-47fe-97e5-d2297c8925b2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:52 GMT]] Body:0xc002319440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023259e0 TLS:<nil>}
I0422 17:11:52.672424   28378 retry.go:31] will retry after 892.16905ms: Temporary Error: unexpected response code: 503
I0422 17:11:53.569189   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c72f00f5-cddf-4caa-a493-9e66b5d455a8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:53 GMT]] Body:0xc002125bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002325c20 TLS:<nil>}
I0422 17:11:53.569264   28378 retry.go:31] will retry after 1.57610963s: Temporary Error: unexpected response code: 503
I0422 17:11:55.149579   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c6b2aa5a-b7b1-4110-81f3-b28ed8ec1085] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:55 GMT]] Body:0xc00208ba80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002135560 TLS:<nil>}
I0422 17:11:55.149645   28378 retry.go:31] will retry after 2.00511827s: Temporary Error: unexpected response code: 503
I0422 17:11:57.159237   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28ce0b89-51ee-4dd5-8b52-8f84a462e460] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:11:57 GMT]] Body:0xc0023195c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023c25a0 TLS:<nil>}
I0422 17:11:57.159312   28378 retry.go:31] will retry after 3.174626319s: Temporary Error: unexpected response code: 503
I0422 17:12:00.337448   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[33678a56-8524-495d-b675-9dacb515f8db] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:12:00 GMT]] Body:0xc002319640 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021357a0 TLS:<nil>}
I0422 17:12:00.337537   28378 retry.go:31] will retry after 5.513607947s: Temporary Error: unexpected response code: 503
I0422 17:12:05.855766   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7a76c3a7-569a-48e8-be20-901d731fe308] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:12:05 GMT]] Body:0xc00208bbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002325e60 TLS:<nil>}
I0422 17:12:05.855837   28378 retry.go:31] will retry after 5.975052895s: Temporary Error: unexpected response code: 503
I0422 17:12:11.834421   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f71016ed-1187-4c31-b950-715c04a70918] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:12:11 GMT]] Body:0xc002125dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023c27e0 TLS:<nil>}
I0422 17:12:11.834483   28378 retry.go:31] will retry after 5.33346768s: Temporary Error: unexpected response code: 503
I0422 17:12:17.172587   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[82525351-3b5b-4262-9f6a-098f8f0279b6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:12:17 GMT]] Body:0xc002125e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023c2a20 TLS:<nil>}
I0422 17:12:17.172651   28378 retry.go:31] will retry after 13.062813955s: Temporary Error: unexpected response code: 503
I0422 17:12:30.239874   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b4921ca2-18e6-4598-a44a-bccdd272a2f5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:12:30 GMT]] Body:0xc00208bd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0021359e0 TLS:<nil>}
I0422 17:12:30.239945   28378 retry.go:31] will retry after 11.660293257s: Temporary Error: unexpected response code: 503
I0422 17:12:41.904428   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2e7e8534-1f82-4ad9-be4e-c9163f86f9c3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:12:41 GMT]] Body:0xc002319780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023c2c60 TLS:<nil>}
I0422 17:12:41.904505   28378 retry.go:31] will retry after 18.048829914s: Temporary Error: unexpected response code: 503
I0422 17:12:59.956610   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d71cb434-406e-48ba-a55e-942e889c84dd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:12:59 GMT]] Body:0xc0023198c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002480120 TLS:<nil>}
I0422 17:12:59.956666   28378 retry.go:31] will retry after 23.087064896s: Temporary Error: unexpected response code: 503
I0422 17:13:23.048908   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b968349b-9e3c-46de-8616-b2ad81e9900a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:13:23 GMT]] Body:0xc002319940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002135c20 TLS:<nil>}
I0422 17:13:23.048965   28378 retry.go:31] will retry after 1m18.153195016s: Temporary Error: unexpected response code: 503
I0422 17:14:41.206865   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[52035792-5995-4102-8263-6e62d76919d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:14:41 GMT]] Body:0xc00206e040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0023c2ea0 TLS:<nil>}
I0422 17:14:41.206933   28378 retry.go:31] will retry after 1m0.009326747s: Temporary Error: unexpected response code: 503
I0422 17:15:41.221004   28378 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[786e6251-f959-4b66-80f9-9ab6f80a3bff] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 22 Apr 2024 17:15:41 GMT]] Body:0xc00208a240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc002480360 TLS:<nil>}
I0422 17:15:41.221073   28378 retry.go:31] will retry after 1m18.455622046s: Temporary Error: unexpected response code: 503
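
Every probe of the kubectl-proxy URL above returns 503 because the dashboard pod never becomes ready, and the backoff grows until the test's budget is exhausted; since the health check never passes, the dashboard URL is never printed, which is exactly the "output didn't produce a URL" failure reported at the top of this test. A minimal sketch of that poll-with-backoff loop (not the dashboard.go implementation; URL and rough timings taken from the log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Proxy URL and overall budget taken from the retries logged above.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("dashboard healthy at", url)
				return
			}
			fmt.Println("temporary error: unexpected response code:", code)
		}
		time.Sleep(delay)
		delay *= 2 // back off roughly exponentially, as in the intervals above
	}
	fmt.Println("gave up: dashboard never became healthy before the deadline")
}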
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-005894 -n functional-005894
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 logs -n 25: (1.509624616s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-005894 ssh stat                                               | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh stat                                               | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh sudo                                               | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh findmnt                                            | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-005894                                                     | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port1009883609/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh findmnt                                            | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh -- ls                                              | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh sudo                                               | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-005894                                                     | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount3    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-005894                                                     | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount1    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-005894                                                     | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount2    |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh findmnt                                            | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh findmnt                                            | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh findmnt                                            | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-005894                                                     | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	| image          | functional-005894                                                        | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-005894                                                        | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-005894 ssh pgrep                                              | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-005894 image build -t                                         | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:12 UTC |
	|                | localhost/my-image:functional-005894                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| image          | functional-005894                                                        | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-005894                                                        | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| update-context | functional-005894                                                        | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-005894                                                        | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC | 22 Apr 24 17:11 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-005894                                                        | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:11 UTC |                     |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-005894 image ls                                               | functional-005894 | jenkins | v1.33.0 | 22 Apr 24 17:12 UTC | 22 Apr 24 17:12 UTC |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 17:11:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 17:11:48.971992   28350 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:11:48.972263   28350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:11:48.972273   28350 out.go:304] Setting ErrFile to fd 2...
	I0422 17:11:48.972277   28350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:11:48.972540   28350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:11:48.973039   28350 out.go:298] Setting JSON to false
	I0422 17:11:48.973851   28350 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3254,"bootTime":1713802655,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:11:48.973913   28350 start.go:139] virtualization: kvm guest
	I0422 17:11:48.975969   28350 out.go:177] * [functional-005894] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:11:48.977674   28350 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:11:48.977693   28350 notify.go:220] Checking for updates...
	I0422 17:11:48.979333   28350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:11:48.980981   28350 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:11:48.982763   28350 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:11:48.984518   28350 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:11:48.986223   28350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:11:48.988430   28350 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:11:48.988823   28350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:11:48.988892   28350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:11:49.003790   28350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0422 17:11:49.004209   28350 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:11:49.004710   28350 main.go:141] libmachine: Using API Version  1
	I0422 17:11:49.004731   28350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:11:49.005085   28350 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:11:49.005397   28350 main.go:141] libmachine: (functional-005894) Calling .DriverName
	I0422 17:11:49.005726   28350 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:11:49.006182   28350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:11:49.006235   28350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:11:49.020844   28350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I0422 17:11:49.021225   28350 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:11:49.021736   28350 main.go:141] libmachine: Using API Version  1
	I0422 17:11:49.021758   28350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:11:49.022101   28350 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:11:49.022366   28350 main.go:141] libmachine: (functional-005894) Calling .DriverName
	I0422 17:11:49.056477   28350 out.go:177] * Using the kvm2 driver based on the existing profile
	I0422 17:11:49.058462   28350 start.go:297] selected driver: kvm2
	I0422 17:11:49.058480   28350 start.go:901] validating driver "kvm2" against &{Name:functional-005894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:functional-005894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:11:49.058583   28350 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:11:49.061055   28350 out.go:177] 
	W0422 17:11:49.062591   28350 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0422 17:11:49.064030   28350 out.go:177] 
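	
	Note: the start above aborts by design once RSRC_INSUFFICIENT_REQ_MEMORY is detected, since the requested 250 MiB is below minikube's usable minimum of 1800 MB, while the existing functional-005894 profile is configured for 4000 MB. A minimal sketch of rerunning the start against that profile with its existing allocation (the flag values below are illustrative, taken from the profile config logged earlier, not from a command recorded in this run):
	
	  out/minikube-linux-amd64 start -p functional-005894 --driver=kvm2 --container-runtime=crio --memory=4000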
	
	
	==> CRI-O <==
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.908514086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806209908488192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:279087,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=011d28ec-28ee-4327-901d-03576a01ac2e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.909135961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3efc110e-a65c-4592-a296-253807f2b59f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.909305750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3efc110e-a65c-4592-a296-253807f2b59f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.909695349Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de9de96fedeee5b896a8cc3c2e1bf5c604ce56f9cfd47a260c7af644692cf73d,PodSandboxId:fbd5317c27e85e92992ad01ddb9986339d0600be5367423202135aeb85c58631,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1713805915492776804,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-jwql6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7b69bbae-5895-4dca-9eb7-c8afc6cea5e4,},Annotations:map[string]string{io.kube
rnetes.container.hash: c6f3e51b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46db6b7fb30073a653b854302fcc741470e765f6d0b410ce1d2b597ae56907a,PodSandboxId:a38dcb7519b479f18a97d64e1284eb3c9096388594ba2cbeaca0088b1ee37ed5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580,State:CONTAINER_RUNNING,CreatedAt:1713805911065817716,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3932b2dd-8d06
-4b5d-9e73-095ba4c6d64f,},Annotations:map[string]string{io.kubernetes.container.hash: 6aaeff60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd453f5d7b98aaa7ce9c8ca3ab8f161b9343b6786ca7dc8ac9d2a0ef0e8c6b71,PodSandboxId:4f0a23613ffa76b41afc1f100f055f65a21b5be1b2171840ac9c3be4e34e38a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1713805910118561554,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ae111ac-f8b1-4
792-8c80-2c8ccb77c33a,},Annotations:map[string]string{io.kubernetes.container.hash: ea1379e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279fed73b72068e67a98b22ad10c689f1312d4c4cbec3aeb3c07a2348802789,PodSandboxId:5726c262789f33dff84b0ef450b12621582e8dbd1ae5a76da12cf097237828cb,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1713805907066412800,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-lcmld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 48333415-f3ca-4489-892e-a
86bf7ed8474,},Annotations:map[string]string{io.kubernetes.container.hash: c18b52d2,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80e0d9388c35f29bfe863e79e000af36a50c3e97c6ce0506465796c06bd1985,PodSandboxId:f023f9005bd554e022bd866a50caefa0706d7d802c4dde5f0da69bf0acc03e9a,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883212950603,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-n
ode-connect-57b4589c47-m29cx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f353c370-a03d-401b-8c2b-e58286e79a22,},Annotations:map[string]string{io.kubernetes.container.hash: 1a053d51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd708e9c00c6d0ca42bd5cb6a6dbbafe2265ea54d18214efecd8b17f11b65e3,PodSandboxId:0d19bf59b765bd9cee36e0a3243f9ce7c1b3651b824fac82ed3e51b303c26c35,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883096117295,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.p
od.name: hello-node-6d85cfcfd8-nk8tv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3f1a142-5e35-4a05-b4aa-7646655fe789,},Annotations:map[string]string{io.kubernetes.container.hash: 58edb7a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3fe5889f6fb8ed05426fb941e0799c14d98148b6f1f7227900ec847382a8f3,PodSandboxId:ee12094f49ab9e7e72eeb4e1f3a9ce6b0e1f3c059df016ac3995a19c584ae22a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805856223408271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9
l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94c4182eef03f2e2973e62ce6a5cc93ba4746b987397bfa4b50059e2a5972fa,PodSandboxId:88f53abc014603c4122bf7973005fc30d6a7d80245ce30dbfa6de1b8186dda37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805855906596910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfa4d60dfeeb66243d1b11a212385e35c2a225b08affab540f882045842d0a7,PodSandboxId:89ef347254447d5469d435860e49d5a3694f8d0771ecfdebf9178aee805b366c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf37
06746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805855909610351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47843a3d6896c387ecd545e594f6c3bd931940ee8cb798b9b2da6f1e1073d444,PodSandboxId:f1340c44d4f33493005869330f5d426ceeb71fe6b9d59b60249614c24f8f4e3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:C
ONTAINER_RUNNING,CreatedAt:1713805851068648628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60535f15a193ffaad8399ffba96877f6bfb0d3b9c74eb04adcd434974437dd70,PodSandboxId:8957b1dcda0fd26656a9c1f85f86b95c6c8777f2d3504fe7b347d1209f4ff376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Create
dAt:1713805851093430212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85e5f46a92a21564ef71f57b18679fcfcbc49026fedc10f67800de8f0654afb,PodSandboxId:8237e87193050e5f9387b42bf11a251ac1169273d493e7f9f2bc8e067aa5e1c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805851026997302,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133207f2bc757d5937bd9c2d61b2f70b77be7f0fea1edc74d0b05e37188ba30a,PodSandboxId:81e7a024029b050993bcf00b7ca5852632042489621fbd8df79109c539e00e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171380585094386
6184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd4ddc9b69a76cb2621ca1c6374d968,},Annotations:map[string]string{io.kubernetes.container.hash: 4203e144,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7970034dd7c4a7d95a14999b595cf0e4552f130cec8c47d0b3210a2ddcd05f06,PodSandboxId:8e7701c22bd49fcd5730a65148cc401ba87098495fab5f058c7cb9b6a016ce75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713805814388799718,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802caa6d41407ccf92dd8eb13162b055d5a554f3a9547106b78f46dd0e1334f4,PodSandboxId:0bfa1033f9264e6aa78738d7a86865b5324c23f5d1cb83731f366bea41eccf35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713805814146510831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0c620dea2697227c3ec50e3231e741c2212a97fffe9ebab44a17df8be46e69,PodSandboxId:d7afe7d20310635673cc42718011cc0f23db6976d249f1549598b504e8db96d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713805814077713115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25200a6415faa4ac198c1859ec661ab6adf9f1808475fbcc67edfd9bd3591bb1,PodSandboxId:a03082a2f84da1700c531ea4b78fe9efcd29e0ef278fd178c04e7b12d8012db2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713805809259773902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c4797b94d282e4f4df7c26ea5c1a756f835d3ba2b236f9eebfa8dfe52d7b62,PodSandboxId:5c6a72c2cf240b23e1d6b13c92df796d41e100beb89f77b25b0b481b55df488f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861
cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713805809226985483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01ed3a7ec1af161d1cb042e9a400bc289f50c169113e9824453edca746bb853,PodSandboxId:e21338d1238c78b9ee374a826fc0f04969b44d2650de432a844e3bda5dcb8a9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c9
0f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713805809250476309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3efc110e-a65c-4592-a296-253807f2b59f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.950725986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=323eed0d-b672-4495-bd73-50283a9748ea name=/runtime.v1.RuntimeService/Version
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.950823658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=323eed0d-b672-4495-bd73-50283a9748ea name=/runtime.v1.RuntimeService/Version
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.952080484Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f4c2c06-fcf6-452f-9644-d5f1c95948cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.953047014Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806209953019732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:279087,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f4c2c06-fcf6-452f-9644-d5f1c95948cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.953838113Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94f3ed85-c168-4166-bb15-8f47963d1f2e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.953920062Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94f3ed85-c168-4166-bb15-8f47963d1f2e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.954489215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de9de96fedeee5b896a8cc3c2e1bf5c604ce56f9cfd47a260c7af644692cf73d,PodSandboxId:fbd5317c27e85e92992ad01ddb9986339d0600be5367423202135aeb85c58631,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1713805915492776804,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-jwql6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7b69bbae-5895-4dca-9eb7-c8afc6cea5e4,},Annotations:map[string]string{io.kube
rnetes.container.hash: c6f3e51b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46db6b7fb30073a653b854302fcc741470e765f6d0b410ce1d2b597ae56907a,PodSandboxId:a38dcb7519b479f18a97d64e1284eb3c9096388594ba2cbeaca0088b1ee37ed5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580,State:CONTAINER_RUNNING,CreatedAt:1713805911065817716,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3932b2dd-8d06
-4b5d-9e73-095ba4c6d64f,},Annotations:map[string]string{io.kubernetes.container.hash: 6aaeff60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd453f5d7b98aaa7ce9c8ca3ab8f161b9343b6786ca7dc8ac9d2a0ef0e8c6b71,PodSandboxId:4f0a23613ffa76b41afc1f100f055f65a21b5be1b2171840ac9c3be4e34e38a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1713805910118561554,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ae111ac-f8b1-4
792-8c80-2c8ccb77c33a,},Annotations:map[string]string{io.kubernetes.container.hash: ea1379e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279fed73b72068e67a98b22ad10c689f1312d4c4cbec3aeb3c07a2348802789,PodSandboxId:5726c262789f33dff84b0ef450b12621582e8dbd1ae5a76da12cf097237828cb,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1713805907066412800,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-lcmld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 48333415-f3ca-4489-892e-a
86bf7ed8474,},Annotations:map[string]string{io.kubernetes.container.hash: c18b52d2,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80e0d9388c35f29bfe863e79e000af36a50c3e97c6ce0506465796c06bd1985,PodSandboxId:f023f9005bd554e022bd866a50caefa0706d7d802c4dde5f0da69bf0acc03e9a,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883212950603,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-n
ode-connect-57b4589c47-m29cx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f353c370-a03d-401b-8c2b-e58286e79a22,},Annotations:map[string]string{io.kubernetes.container.hash: 1a053d51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd708e9c00c6d0ca42bd5cb6a6dbbafe2265ea54d18214efecd8b17f11b65e3,PodSandboxId:0d19bf59b765bd9cee36e0a3243f9ce7c1b3651b824fac82ed3e51b303c26c35,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883096117295,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.p
od.name: hello-node-6d85cfcfd8-nk8tv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3f1a142-5e35-4a05-b4aa-7646655fe789,},Annotations:map[string]string{io.kubernetes.container.hash: 58edb7a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3fe5889f6fb8ed05426fb941e0799c14d98148b6f1f7227900ec847382a8f3,PodSandboxId:ee12094f49ab9e7e72eeb4e1f3a9ce6b0e1f3c059df016ac3995a19c584ae22a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805856223408271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9
l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94c4182eef03f2e2973e62ce6a5cc93ba4746b987397bfa4b50059e2a5972fa,PodSandboxId:88f53abc014603c4122bf7973005fc30d6a7d80245ce30dbfa6de1b8186dda37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805855906596910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfa4d60dfeeb66243d1b11a212385e35c2a225b08affab540f882045842d0a7,PodSandboxId:89ef347254447d5469d435860e49d5a3694f8d0771ecfdebf9178aee805b366c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf37
06746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805855909610351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47843a3d6896c387ecd545e594f6c3bd931940ee8cb798b9b2da6f1e1073d444,PodSandboxId:f1340c44d4f33493005869330f5d426ceeb71fe6b9d59b60249614c24f8f4e3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:C
ONTAINER_RUNNING,CreatedAt:1713805851068648628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60535f15a193ffaad8399ffba96877f6bfb0d3b9c74eb04adcd434974437dd70,PodSandboxId:8957b1dcda0fd26656a9c1f85f86b95c6c8777f2d3504fe7b347d1209f4ff376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Create
dAt:1713805851093430212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85e5f46a92a21564ef71f57b18679fcfcbc49026fedc10f67800de8f0654afb,PodSandboxId:8237e87193050e5f9387b42bf11a251ac1169273d493e7f9f2bc8e067aa5e1c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805851026997302,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133207f2bc757d5937bd9c2d61b2f70b77be7f0fea1edc74d0b05e37188ba30a,PodSandboxId:81e7a024029b050993bcf00b7ca5852632042489621fbd8df79109c539e00e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171380585094386
6184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd4ddc9b69a76cb2621ca1c6374d968,},Annotations:map[string]string{io.kubernetes.container.hash: 4203e144,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7970034dd7c4a7d95a14999b595cf0e4552f130cec8c47d0b3210a2ddcd05f06,PodSandboxId:8e7701c22bd49fcd5730a65148cc401ba87098495fab5f058c7cb9b6a016ce75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713805814388799718,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802caa6d41407ccf92dd8eb13162b055d5a554f3a9547106b78f46dd0e1334f4,PodSandboxId:0bfa1033f9264e6aa78738d7a86865b5324c23f5d1cb83731f366bea41eccf35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713805814146510831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0c620dea2697227c3ec50e3231e741c2212a97fffe9ebab44a17df8be46e69,PodSandboxId:d7afe7d20310635673cc42718011cc0f23db6976d249f1549598b504e8db96d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713805814077713115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25200a6415faa4ac198c1859ec661ab6adf9f1808475fbcc67edfd9bd3591bb1,PodSandboxId:a03082a2f84da1700c531ea4b78fe9efcd29e0ef278fd178c04e7b12d8012db2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713805809259773902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c4797b94d282e4f4df7c26ea5c1a756f835d3ba2b236f9eebfa8dfe52d7b62,PodSandboxId:5c6a72c2cf240b23e1d6b13c92df796d41e100beb89f77b25b0b481b55df488f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861
cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713805809226985483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01ed3a7ec1af161d1cb042e9a400bc289f50c169113e9824453edca746bb853,PodSandboxId:e21338d1238c78b9ee374a826fc0f04969b44d2650de432a844e3bda5dcb8a9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c9
0f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713805809250476309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94f3ed85-c168-4166-bb15-8f47963d1f2e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.990318263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e7e95eef-091a-46d2-9888-aa4c92f3f673 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.990410225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e7e95eef-091a-46d2-9888-aa4c92f3f673 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.991996271Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9089d55-477d-45ba-8eee-e718093b1106 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.992912913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806209992889210,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:279087,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9089d55-477d-45ba-8eee-e718093b1106 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.993663402Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=600b70e9-ee77-41aa-907a-75662b026872 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.993739252Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=600b70e9-ee77-41aa-907a-75662b026872 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:49 functional-005894 crio[4622]: time="2024-04-22 17:16:49.994106661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de9de96fedeee5b896a8cc3c2e1bf5c604ce56f9cfd47a260c7af644692cf73d,PodSandboxId:fbd5317c27e85e92992ad01ddb9986339d0600be5367423202135aeb85c58631,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1713805915492776804,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-jwql6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7b69bbae-5895-4dca-9eb7-c8afc6cea5e4,},Annotations:map[string]string{io.kube
rnetes.container.hash: c6f3e51b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46db6b7fb30073a653b854302fcc741470e765f6d0b410ce1d2b597ae56907a,PodSandboxId:a38dcb7519b479f18a97d64e1284eb3c9096388594ba2cbeaca0088b1ee37ed5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580,State:CONTAINER_RUNNING,CreatedAt:1713805911065817716,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3932b2dd-8d06
-4b5d-9e73-095ba4c6d64f,},Annotations:map[string]string{io.kubernetes.container.hash: 6aaeff60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd453f5d7b98aaa7ce9c8ca3ab8f161b9343b6786ca7dc8ac9d2a0ef0e8c6b71,PodSandboxId:4f0a23613ffa76b41afc1f100f055f65a21b5be1b2171840ac9c3be4e34e38a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1713805910118561554,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ae111ac-f8b1-4
792-8c80-2c8ccb77c33a,},Annotations:map[string]string{io.kubernetes.container.hash: ea1379e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279fed73b72068e67a98b22ad10c689f1312d4c4cbec3aeb3c07a2348802789,PodSandboxId:5726c262789f33dff84b0ef450b12621582e8dbd1ae5a76da12cf097237828cb,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1713805907066412800,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-lcmld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 48333415-f3ca-4489-892e-a
86bf7ed8474,},Annotations:map[string]string{io.kubernetes.container.hash: c18b52d2,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80e0d9388c35f29bfe863e79e000af36a50c3e97c6ce0506465796c06bd1985,PodSandboxId:f023f9005bd554e022bd866a50caefa0706d7d802c4dde5f0da69bf0acc03e9a,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883212950603,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-n
ode-connect-57b4589c47-m29cx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f353c370-a03d-401b-8c2b-e58286e79a22,},Annotations:map[string]string{io.kubernetes.container.hash: 1a053d51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd708e9c00c6d0ca42bd5cb6a6dbbafe2265ea54d18214efecd8b17f11b65e3,PodSandboxId:0d19bf59b765bd9cee36e0a3243f9ce7c1b3651b824fac82ed3e51b303c26c35,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883096117295,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.p
od.name: hello-node-6d85cfcfd8-nk8tv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3f1a142-5e35-4a05-b4aa-7646655fe789,},Annotations:map[string]string{io.kubernetes.container.hash: 58edb7a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3fe5889f6fb8ed05426fb941e0799c14d98148b6f1f7227900ec847382a8f3,PodSandboxId:ee12094f49ab9e7e72eeb4e1f3a9ce6b0e1f3c059df016ac3995a19c584ae22a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805856223408271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9
l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94c4182eef03f2e2973e62ce6a5cc93ba4746b987397bfa4b50059e2a5972fa,PodSandboxId:88f53abc014603c4122bf7973005fc30d6a7d80245ce30dbfa6de1b8186dda37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805855906596910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfa4d60dfeeb66243d1b11a212385e35c2a225b08affab540f882045842d0a7,PodSandboxId:89ef347254447d5469d435860e49d5a3694f8d0771ecfdebf9178aee805b366c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf37
06746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805855909610351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47843a3d6896c387ecd545e594f6c3bd931940ee8cb798b9b2da6f1e1073d444,PodSandboxId:f1340c44d4f33493005869330f5d426ceeb71fe6b9d59b60249614c24f8f4e3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:C
ONTAINER_RUNNING,CreatedAt:1713805851068648628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60535f15a193ffaad8399ffba96877f6bfb0d3b9c74eb04adcd434974437dd70,PodSandboxId:8957b1dcda0fd26656a9c1f85f86b95c6c8777f2d3504fe7b347d1209f4ff376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Create
dAt:1713805851093430212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85e5f46a92a21564ef71f57b18679fcfcbc49026fedc10f67800de8f0654afb,PodSandboxId:8237e87193050e5f9387b42bf11a251ac1169273d493e7f9f2bc8e067aa5e1c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805851026997302,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133207f2bc757d5937bd9c2d61b2f70b77be7f0fea1edc74d0b05e37188ba30a,PodSandboxId:81e7a024029b050993bcf00b7ca5852632042489621fbd8df79109c539e00e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171380585094386
6184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd4ddc9b69a76cb2621ca1c6374d968,},Annotations:map[string]string{io.kubernetes.container.hash: 4203e144,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7970034dd7c4a7d95a14999b595cf0e4552f130cec8c47d0b3210a2ddcd05f06,PodSandboxId:8e7701c22bd49fcd5730a65148cc401ba87098495fab5f058c7cb9b6a016ce75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713805814388799718,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802caa6d41407ccf92dd8eb13162b055d5a554f3a9547106b78f46dd0e1334f4,PodSandboxId:0bfa1033f9264e6aa78738d7a86865b5324c23f5d1cb83731f366bea41eccf35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713805814146510831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0c620dea2697227c3ec50e3231e741c2212a97fffe9ebab44a17df8be46e69,PodSandboxId:d7afe7d20310635673cc42718011cc0f23db6976d249f1549598b504e8db96d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713805814077713115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25200a6415faa4ac198c1859ec661ab6adf9f1808475fbcc67edfd9bd3591bb1,PodSandboxId:a03082a2f84da1700c531ea4b78fe9efcd29e0ef278fd178c04e7b12d8012db2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713805809259773902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c4797b94d282e4f4df7c26ea5c1a756f835d3ba2b236f9eebfa8dfe52d7b62,PodSandboxId:5c6a72c2cf240b23e1d6b13c92df796d41e100beb89f77b25b0b481b55df488f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861
cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713805809226985483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01ed3a7ec1af161d1cb042e9a400bc289f50c169113e9824453edca746bb853,PodSandboxId:e21338d1238c78b9ee374a826fc0f04969b44d2650de432a844e3bda5dcb8a9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c9
0f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713805809250476309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=600b70e9-ee77-41aa-907a-75662b026872 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:50 functional-005894 crio[4622]: time="2024-04-22 17:16:50.028698041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad7a78bd-737a-4eae-a43e-c3424394ee18 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:16:50 functional-005894 crio[4622]: time="2024-04-22 17:16:50.028797650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad7a78bd-737a-4eae-a43e-c3424394ee18 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:16:50 functional-005894 crio[4622]: time="2024-04-22 17:16:50.031249156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=386a15ee-6fc0-4c2b-b5c9-0975e78d4890 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:16:50 functional-005894 crio[4622]: time="2024-04-22 17:16:50.031974976Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806210031953262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:279087,},InodesUsed:&UInt64Value{Value:127,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=386a15ee-6fc0-4c2b-b5c9-0975e78d4890 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:16:50 functional-005894 crio[4622]: time="2024-04-22 17:16:50.032871244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=875f8f33-bd7f-449b-9387-813ccec39bdf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:50 functional-005894 crio[4622]: time="2024-04-22 17:16:50.032947241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=875f8f33-bd7f-449b-9387-813ccec39bdf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:16:50 functional-005894 crio[4622]: time="2024-04-22 17:16:50.033416532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de9de96fedeee5b896a8cc3c2e1bf5c604ce56f9cfd47a260c7af644692cf73d,PodSandboxId:fbd5317c27e85e92992ad01ddb9986339d0600be5367423202135aeb85c58631,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1713805915492776804,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-jwql6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7b69bbae-5895-4dca-9eb7-c8afc6cea5e4,},Annotations:map[string]string{io.kube
rnetes.container.hash: c6f3e51b,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46db6b7fb30073a653b854302fcc741470e765f6d0b410ce1d2b597ae56907a,PodSandboxId:a38dcb7519b479f18a97d64e1284eb3c9096388594ba2cbeaca0088b1ee37ed5,Metadata:&ContainerMetadata{Name:myfrontend,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580,State:CONTAINER_RUNNING,CreatedAt:1713805911065817716,Labels:map[string]string{io.kubernetes.container.name: myfrontend,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3932b2dd-8d06
-4b5d-9e73-095ba4c6d64f,},Annotations:map[string]string{io.kubernetes.container.hash: 6aaeff60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd453f5d7b98aaa7ce9c8ca3ab8f161b9343b6786ca7dc8ac9d2a0ef0e8c6b71,PodSandboxId:4f0a23613ffa76b41afc1f100f055f65a21b5be1b2171840ac9c3be4e34e38a2,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1713805910118561554,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ae111ac-f8b1-4
792-8c80-2c8ccb77c33a,},Annotations:map[string]string{io.kubernetes.container.hash: ea1379e4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a279fed73b72068e67a98b22ad10c689f1312d4c4cbec3aeb3c07a2348802789,PodSandboxId:5726c262789f33dff84b0ef450b12621582e8dbd1ae5a76da12cf097237828cb,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1713805907066412800,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-lcmld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 48333415-f3ca-4489-892e-a
86bf7ed8474,},Annotations:map[string]string{io.kubernetes.container.hash: c18b52d2,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d80e0d9388c35f29bfe863e79e000af36a50c3e97c6ce0506465796c06bd1985,PodSandboxId:f023f9005bd554e022bd866a50caefa0706d7d802c4dde5f0da69bf0acc03e9a,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883212950603,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-n
ode-connect-57b4589c47-m29cx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f353c370-a03d-401b-8c2b-e58286e79a22,},Annotations:map[string]string{io.kubernetes.container.hash: 1a053d51,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7dd708e9c00c6d0ca42bd5cb6a6dbbafe2265ea54d18214efecd8b17f11b65e3,PodSandboxId:0d19bf59b765bd9cee36e0a3243f9ce7c1b3651b824fac82ed3e51b303c26c35,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1713805883096117295,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.p
od.name: hello-node-6d85cfcfd8-nk8tv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b3f1a142-5e35-4a05-b4aa-7646655fe789,},Annotations:map[string]string{io.kubernetes.container.hash: 58edb7a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3fe5889f6fb8ed05426fb941e0799c14d98148b6f1f7227900ec847382a8f3,PodSandboxId:ee12094f49ab9e7e72eeb4e1f3a9ce6b0e1f3c059df016ac3995a19c584ae22a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713805856223408271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9
l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d94c4182eef03f2e2973e62ce6a5cc93ba4746b987397bfa4b50059e2a5972fa,PodSandboxId:88f53abc014603c4122bf7973005fc30d6a7d80245ce30dbfa6de1b8186dda37,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713805855906596910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cfa4d60dfeeb66243d1b11a212385e35c2a225b08affab540f882045842d0a7,PodSandboxId:89ef347254447d5469d435860e49d5a3694f8d0771ecfdebf9178aee805b366c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf37
06746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713805855909610351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47843a3d6896c387ecd545e594f6c3bd931940ee8cb798b9b2da6f1e1073d444,PodSandboxId:f1340c44d4f33493005869330f5d426ceeb71fe6b9d59b60249614c24f8f4e3c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:C
ONTAINER_RUNNING,CreatedAt:1713805851068648628,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60535f15a193ffaad8399ffba96877f6bfb0d3b9c74eb04adcd434974437dd70,PodSandboxId:8957b1dcda0fd26656a9c1f85f86b95c6c8777f2d3504fe7b347d1209f4ff376,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,Create
dAt:1713805851093430212,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f85e5f46a92a21564ef71f57b18679fcfcbc49026fedc10f67800de8f0654afb,PodSandboxId:8237e87193050e5f9387b42bf11a251ac1169273d493e7f9f2bc8e067aa5e1c6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713805851026997302,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133207f2bc757d5937bd9c2d61b2f70b77be7f0fea1edc74d0b05e37188ba30a,PodSandboxId:81e7a024029b050993bcf00b7ca5852632042489621fbd8df79109c539e00e8b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:171380585094386
6184,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fd4ddc9b69a76cb2621ca1c6374d968,},Annotations:map[string]string{io.kubernetes.container.hash: 4203e144,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7970034dd7c4a7d95a14999b595cf0e4552f130cec8c47d0b3210a2ddcd05f06,PodSandboxId:8e7701c22bd49fcd5730a65148cc401ba87098495fab5f058c7cb9b6a016ce75,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713805814388799718,Labels:map[strin
g]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wtn9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30def449-40aa-42ce-bfa8-1bb2acaf7a1b,},Annotations:map[string]string{io.kubernetes.container.hash: f9fbe52e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802caa6d41407ccf92dd8eb13162b055d5a554f3a9547106b78f46dd0e1334f4,PodSandboxId:0bfa1033f9264e6aa78738d7a86865b5324c23f5d1cb83731f366bea41eccf35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[
string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713805814146510831,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lv4fw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd6835dd-1e8c-4609-a7c0-959a6f228e34,},Annotations:map[string]string{io.kubernetes.container.hash: ab86ebc8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea0c620dea2697227c3ec50e3231e741c2212a97fffe9ebab44a17df8be46e69,PodSandboxId:d7afe7d20310635673cc42718011cc0f23db6976d249f1549598b504e8db96d2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713805814077713115,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c46927a2-67d8-4562-a268-3a71a9c364c8,},Annotations:map[string]string{io.kubernetes.container.hash: eca37196,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25200a6415faa4ac198c1859ec661ab6adf9f1808475fbcc67edfd9bd3591bb1,PodSandboxId:a03082a2f84da1700c531ea4b78fe9efcd29e0ef278fd178c04e7b12d8012db2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713805809259773902,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98da975dde401551feac4642b0441aa3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63c4797b94d282e4f4df7c26ea5c1a756f835d3ba2b236f9eebfa8dfe52d7b62,PodSandboxId:5c6a72c2cf240b23e1d6b13c92df796d41e100beb89f77b25b0b481b55df488f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861
cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713805809226985483,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 471c56e0d93f13fdd53344077c2c277c,},Annotations:map[string]string{io.kubernetes.container.hash: 60724175,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d01ed3a7ec1af161d1cb042e9a400bc289f50c169113e9824453edca746bb853,PodSandboxId:e21338d1238c78b9ee374a826fc0f04969b44d2650de432a844e3bda5dcb8a9c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c9
0f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713805809250476309,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-005894,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5eafb80a5167735b5ba9a4cd260c19d,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=875f8f33-bd7f-449b-9387-813ccec39bdf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	de9de96fedeee       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   4 minutes ago       Running             dashboard-metrics-scraper   0                   fbd5317c27e85       dashboard-metrics-scraper-b5fc48f67-jwql6
	d46db6b7fb300       docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419                  4 minutes ago       Running             myfrontend                  0                   a38dcb7519b47       sp-pod
	bd453f5d7b98a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              4 minutes ago       Exited              mount-munger                0                   4f0a23613ffa7       busybox-mount
	a279fed73b720       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  5 minutes ago       Running             mysql                       0                   5726c262789f3       mysql-64454c8b5c-lcmld
	d80e0d9388c35       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               5 minutes ago       Running             echoserver                  0                   f023f9005bd55       hello-node-connect-57b4589c47-m29cx
	7dd708e9c00c6       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               5 minutes ago       Running             echoserver                  0                   0d19bf59b765b       hello-node-6d85cfcfd8-nk8tv
	cf3fe5889f6fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 5 minutes ago       Running             coredns                     2                   ee12094f49ab9       coredns-7db6d8ff4d-wtn9l
	1cfa4d60dfeeb       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                 5 minutes ago       Running             kube-proxy                  2                   89ef347254447       kube-proxy-lv4fw
	d94c4182eef03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 5 minutes ago       Running             storage-provisioner         2                   88f53abc01460       storage-provisioner
	60535f15a193f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 5 minutes ago       Running             etcd                        2                   8957b1dcda0fd       etcd-functional-005894
	47843a3d6896c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                 5 minutes ago       Running             kube-scheduler              2                   f1340c44d4f33       kube-scheduler-functional-005894
	f85e5f46a92a2       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                 5 minutes ago       Running             kube-controller-manager     2                   8237e87193050       kube-controller-manager-functional-005894
	133207f2bc757       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                                 5 minutes ago       Running             kube-apiserver              0                   81e7a024029b0       kube-apiserver-functional-005894
	7970034dd7c4a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 6 minutes ago       Exited              coredns                     1                   8e7701c22bd49       coredns-7db6d8ff4d-wtn9l
	802caa6d41407       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                                 6 minutes ago       Exited              kube-proxy                  1                   0bfa1033f9264       kube-proxy-lv4fw
	ea0c620dea269       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Exited              storage-provisioner         1                   d7afe7d203106       storage-provisioner
	25200a6415faa       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                                 6 minutes ago       Exited              kube-scheduler              1                   a03082a2f84da       kube-scheduler-functional-005894
	d01ed3a7ec1af       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                                 6 minutes ago       Exited              kube-controller-manager     1                   e21338d1238c7       kube-controller-manager-functional-005894
	63c4797b94d28       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 6 minutes ago       Exited              etcd                        1                   5c6a72c2cf240       etcd-functional-005894
	
	
	==> coredns [7970034dd7c4a7d95a14999b595cf0e4552f130cec8c47d0b3210a2ddcd05f06] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51957 - 37434 "HINFO IN 1286078655410485311.8021261056584401872. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00972229s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cf3fe5889f6fb8ed05426fb941e0799c14d98148b6f1f7227900ec847382a8f3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:58946 - 7136 "HINFO IN 5805594072987178872.2056656857874795779. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009864118s
	
	
	==> describe nodes <==
	Name:               functional-005894
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-005894
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=functional-005894
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_09_36_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:09:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-005894
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:16:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:12:27 +0000   Mon, 22 Apr 2024 17:09:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:12:27 +0000   Mon, 22 Apr 2024 17:09:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:12:27 +0000   Mon, 22 Apr 2024 17:09:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:12:27 +0000   Mon, 22 Apr 2024 17:09:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.154
	  Hostname:    functional-005894
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a817143037d4f5ba182a5e6bd6b73df
	  System UUID:                3a817143-037d-4f5b-a182-a5e6bd6b73df
	  Boot ID:                    0957eac3-1c7e-4f7b-af18-224a2f87d438
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-nk8tv                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  default                     hello-node-connect-57b4589c47-m29cx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  default                     mysql-64454c8b5c-lcmld                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    5m20s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 coredns-7db6d8ff4d-wtn9l                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m2s
	  kube-system                 etcd-functional-005894                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m15s
	  kube-system                 kube-apiserver-functional-005894             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-controller-manager-functional-005894    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-lv4fw                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-scheduler-functional-005894             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-jwql6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-jccbh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m59s                  kube-proxy       
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  7m15s                  kubelet          Node functional-005894 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  7m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m15s                  kubelet          Node functional-005894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s                  kubelet          Node functional-005894 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m15s                  kubelet          Starting kubelet.
	  Normal  NodeReady                7m14s                  kubelet          Node functional-005894 status is now: NodeReady
	  Normal  RegisteredNode           7m3s                   node-controller  Node functional-005894 event: Registered Node functional-005894 in Controller
	  Normal  NodeHasNoDiskPressure    6m42s (x8 over 6m42s)  kubelet          Node functional-005894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  6m42s (x8 over 6m42s)  kubelet          Node functional-005894 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m42s (x7 over 6m42s)  kubelet          Node functional-005894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m25s                  node-controller  Node functional-005894 event: Registered Node functional-005894 in Controller
	  Normal  Starting                 6m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet          Node functional-005894 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet          Node functional-005894 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)        kubelet          Node functional-005894 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m43s                  node-controller  Node functional-005894 event: Registered Node functional-005894 in Controller
	
	
	==> dmesg <==
	[  +3.712220] systemd-fstab-generator[2836]: Ignoring "noauto" option for root device
	[  +2.070205] systemd-fstab-generator[2959]: Ignoring "noauto" option for root device
	[  +0.082151] kauditd_printk_skb: 170 callbacks suppressed
	[  +5.454054] kauditd_printk_skb: 52 callbacks suppressed
	[ +12.328952] kauditd_printk_skb: 35 callbacks suppressed
	[  +0.976353] systemd-fstab-generator[3729]: Ignoring "noauto" option for root device
	[ +19.011784] systemd-fstab-generator[4542]: Ignoring "noauto" option for root device
	[  +0.075830] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.061510] systemd-fstab-generator[4554]: Ignoring "noauto" option for root device
	[  +0.169758] systemd-fstab-generator[4568]: Ignoring "noauto" option for root device
	[  +0.137418] systemd-fstab-generator[4580]: Ignoring "noauto" option for root device
	[  +0.270999] systemd-fstab-generator[4608]: Ignoring "noauto" option for root device
	[  +0.815791] systemd-fstab-generator[4733]: Ignoring "noauto" option for root device
	[  +2.396324] systemd-fstab-generator[4857]: Ignoring "noauto" option for root device
	[  +1.218070] kauditd_printk_skb: 200 callbacks suppressed
	[  +5.073705] kauditd_printk_skb: 38 callbacks suppressed
	[Apr22 17:11] systemd-fstab-generator[5731]: Ignoring "noauto" option for root device
	[  +6.692138] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.804026] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.804535] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.189258] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.945034] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.023228] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.021980] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.950238] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [60535f15a193ffaad8399ffba96877f6bfb0d3b9c74eb04adcd434974437dd70] <==
	{"level":"info","ts":"2024-04-22T17:11:34.255434Z","caller":"traceutil/trace.go:171","msg":"trace[1452685878] linearizableReadLoop","detail":"{readStateIndex:781; appliedIndex:780; }","duration":"183.812052ms","start":"2024-04-22T17:11:34.071607Z","end":"2024-04-22T17:11:34.255419Z","steps":["trace[1452685878] 'read index received'  (duration: 183.664421ms)","trace[1452685878] 'applied index is now lower than readState.Index'  (duration: 146.924µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:11:34.255643Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.012495ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11674"}
	{"level":"info","ts":"2024-04-22T17:11:34.25567Z","caller":"traceutil/trace.go:171","msg":"trace[1889382852] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:719; }","duration":"184.080308ms","start":"2024-04-22T17:11:34.071582Z","end":"2024-04-22T17:11:34.255663Z","steps":["trace[1889382852] 'agreement among raft nodes before linearized reading'  (duration: 183.918629ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:11:34.255968Z","caller":"traceutil/trace.go:171","msg":"trace[1545284683] transaction","detail":"{read_only:false; response_revision:719; number_of_response:1; }","duration":"220.516668ms","start":"2024-04-22T17:11:34.035441Z","end":"2024-04-22T17:11:34.255958Z","steps":["trace[1545284683] 'process raft request'  (duration: 219.848307ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:11:40.230854Z","caller":"traceutil/trace.go:171","msg":"trace[1178345188] linearizableReadLoop","detail":"{readStateIndex:793; appliedIndex:792; }","duration":"159.332366ms","start":"2024-04-22T17:11:40.071505Z","end":"2024-04-22T17:11:40.230838Z","steps":["trace[1178345188] 'read index received'  (duration: 159.188964ms)","trace[1178345188] 'applied index is now lower than readState.Index'  (duration: 143.082µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:11:40.231034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.504649ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14621"}
	{"level":"info","ts":"2024-04-22T17:11:40.231058Z","caller":"traceutil/trace.go:171","msg":"trace[1007401599] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:730; }","duration":"159.56909ms","start":"2024-04-22T17:11:40.071482Z","end":"2024-04-22T17:11:40.231051Z","steps":["trace[1007401599] 'agreement among raft nodes before linearized reading'  (duration: 159.416119ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:11:40.231397Z","caller":"traceutil/trace.go:171","msg":"trace[818805125] transaction","detail":"{read_only:false; response_revision:730; number_of_response:1; }","duration":"474.052859ms","start":"2024-04-22T17:11:39.757334Z","end":"2024-04-22T17:11:40.231387Z","steps":["trace[818805125] 'process raft request'  (duration: 473.405495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:11:40.231468Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T17:11:39.757319Z","time spent":"474.101196ms","remote":"127.0.0.1:37832","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3153,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/sp-pod\" mod_revision:718 > success:<request_put:<key:\"/registry/pods/default/sp-pod\" value_size:3116 >> failure:<request_range:<key:\"/registry/pods/default/sp-pod\" > >"}
	{"level":"info","ts":"2024-04-22T17:11:42.429534Z","caller":"traceutil/trace.go:171","msg":"trace[1612193705] linearizableReadLoop","detail":"{readStateIndex:798; appliedIndex:797; }","duration":"119.141434ms","start":"2024-04-22T17:11:42.310377Z","end":"2024-04-22T17:11:42.429518Z","steps":["trace[1612193705] 'read index received'  (duration: 118.948223ms)","trace[1612193705] 'applied index is now lower than readState.Index'  (duration: 192.741µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:11:42.429715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.317173ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-04-22T17:11:42.429765Z","caller":"traceutil/trace.go:171","msg":"trace[1256608374] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:734; }","duration":"119.40189ms","start":"2024-04-22T17:11:42.310353Z","end":"2024-04-22T17:11:42.429755Z","steps":["trace[1256608374] 'agreement among raft nodes before linearized reading'  (duration: 119.244286ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:11:42.429943Z","caller":"traceutil/trace.go:171","msg":"trace[326340206] transaction","detail":"{read_only:false; response_revision:734; number_of_response:1; }","duration":"296.356885ms","start":"2024-04-22T17:11:42.13357Z","end":"2024-04-22T17:11:42.429927Z","steps":["trace[326340206] 'process raft request'  (duration: 295.852394ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:11:45.003574Z","caller":"traceutil/trace.go:171","msg":"trace[1624066631] linearizableReadLoop","detail":"{readStateIndex:806; appliedIndex:805; }","duration":"487.050355ms","start":"2024-04-22T17:11:44.516479Z","end":"2024-04-22T17:11:45.00353Z","steps":["trace[1624066631] 'read index received'  (duration: 486.899517ms)","trace[1624066631] 'applied index is now lower than readState.Index'  (duration: 150.066µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:11:45.003866Z","caller":"traceutil/trace.go:171","msg":"trace[283169895] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"545.225401ms","start":"2024-04-22T17:11:44.458628Z","end":"2024-04-22T17:11:45.003853Z","steps":["trace[283169895] 'process raft request'  (duration: 544.792912ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:11:45.003951Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T17:11:44.45861Z","time spent":"545.285691ms","remote":"127.0.0.1:37820","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:735 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-04-22T17:11:45.004173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"487.685424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14536"}
	{"level":"info","ts":"2024-04-22T17:11:45.004252Z","caller":"traceutil/trace.go:171","msg":"trace[1338929549] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:742; }","duration":"487.741983ms","start":"2024-04-22T17:11:44.516447Z","end":"2024-04-22T17:11:45.004189Z","steps":["trace[1338929549] 'agreement among raft nodes before linearized reading'  (duration: 487.634515ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:11:45.004287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T17:11:44.516433Z","time spent":"487.847324ms","remote":"127.0.0.1:37832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":14559,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-04-22T17:11:45.00451Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"409.850651ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:5 size:14536"}
	{"level":"info","ts":"2024-04-22T17:11:45.004595Z","caller":"traceutil/trace.go:171","msg":"trace[2114515723] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:5; response_revision:742; }","duration":"409.95911ms","start":"2024-04-22T17:11:44.59463Z","end":"2024-04-22T17:11:45.004589Z","steps":["trace[2114515723] 'agreement among raft nodes before linearized reading'  (duration: 409.772852ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:11:45.004651Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T17:11:44.594617Z","time spent":"410.027741ms","remote":"127.0.0.1:37832","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":5,"response size":14559,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-04-22T17:11:48.370265Z","caller":"traceutil/trace.go:171","msg":"trace[1244821279] transaction","detail":"{read_only:false; response_revision:751; number_of_response:1; }","duration":"180.674447ms","start":"2024-04-22T17:11:48.189128Z","end":"2024-04-22T17:11:48.369803Z","steps":["trace[1244821279] 'process raft request'  (duration: 180.465686ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:11:55.277177Z","caller":"traceutil/trace.go:171","msg":"trace[370883998] transaction","detail":"{read_only:false; response_revision:827; number_of_response:1; }","duration":"126.999906ms","start":"2024-04-22T17:11:55.150155Z","end":"2024-04-22T17:11:55.277155Z","steps":["trace[370883998] 'process raft request'  (duration: 126.750101ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:12:33.757703Z","caller":"traceutil/trace.go:171","msg":"trace[1416387820] transaction","detail":"{read_only:false; response_revision:869; number_of_response:1; }","duration":"195.10053ms","start":"2024-04-22T17:12:33.562573Z","end":"2024-04-22T17:12:33.757674Z","steps":["trace[1416387820] 'process raft request'  (duration: 194.984302ms)"],"step_count":1}
	
	
	==> etcd [63c4797b94d282e4f4df7c26ea5c1a756f835d3ba2b236f9eebfa8dfe52d7b62] <==
	{"level":"info","ts":"2024-04-22T17:10:09.725647Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-04-22T17:10:11.506101Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T17:10:11.506279Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T17:10:11.506347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgPreVoteResp from 10fb7b0a157fc334 at term 2"}
	{"level":"info","ts":"2024-04-22T17:10:11.506383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T17:10:11.506407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 received MsgVoteResp from 10fb7b0a157fc334 at term 3"}
	{"level":"info","ts":"2024-04-22T17:10:11.506434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"10fb7b0a157fc334 became leader at term 3"}
	{"level":"info","ts":"2024-04-22T17:10:11.506461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 10fb7b0a157fc334 elected leader 10fb7b0a157fc334 at term 3"}
	{"level":"info","ts":"2024-04-22T17:10:11.512739Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"10fb7b0a157fc334","local-member-attributes":"{Name:functional-005894 ClientURLs:[https://192.168.39.154:2379]}","request-path":"/0/members/10fb7b0a157fc334/attributes","cluster-id":"bd4b2769e12dd4ff","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T17:10:11.512753Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T17:10:11.513048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T17:10:11.513085Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T17:10:11.512777Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T17:10:11.51488Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T17:10:11.514889Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.154:2379"}
	{"level":"info","ts":"2024-04-22T17:10:39.90417Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T17:10:39.904311Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-005894","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	{"level":"warn","ts":"2024-04-22T17:10:39.904439Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:10:39.904544Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:10:39.972906Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:10:39.972995Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.154:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T17:10:39.973095Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"10fb7b0a157fc334","current-leader-member-id":"10fb7b0a157fc334"}
	{"level":"info","ts":"2024-04-22T17:10:39.976674Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-04-22T17:10:39.977023Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.154:2380"}
	{"level":"info","ts":"2024-04-22T17:10:39.977099Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-005894","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.154:2380"],"advertise-client-urls":["https://192.168.39.154:2379"]}
	
	
	==> kernel <==
	 17:16:50 up 7 min,  0 users,  load average: 0.07, 0.35, 0.22
	Linux functional-005894 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [133207f2bc757d5937bd9c2d61b2f70b77be7f0fea1edc74d0b05e37188ba30a] <==
	I0422 17:10:56.247297       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 17:10:56.313434       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 17:10:56.368681       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 17:10:56.380124       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 17:11:07.645753       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 17:11:07.697987       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 17:11:13.202868       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.146.93"}
	I0422 17:11:18.864349       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0422 17:11:18.989154       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.24.38"}
	I0422 17:11:19.782310       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.60.82"}
	I0422 17:11:29.980164       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.194.49"}
	I0422 17:11:29.980370       1 trace.go:236] Trace[378723833]: "Create" accept:application/json,audit-id:3ed3cec0-039f-4d97-a74d-cd064b5e97e6,client:192.168.39.1,api-group:,api-version:v1,name:,subresource:,namespace:default,protocol:HTTP/2.0,resource:services,scope:resource,url:/api/v1/namespaces/default/services,user-agent:kubectl/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:POST (22-Apr-2024 17:11:29.417) (total time: 562ms):
	Trace[378723833]: [562.371874ms] [562.371874ms] END
	E0422 17:11:39.702611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8441->192.168.39.1:58750: use of closed network connection
	I0422 17:11:45.005363       1 trace.go:236] Trace[1218171398]: "Update" accept:application/json, */*,audit-id:b3f010a7-f4ec-4f7a-80a6-117fc9f98ab4,client:192.168.39.154,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (22-Apr-2024 17:11:44.456) (total time: 549ms):
	Trace[1218171398]: ["GuaranteedUpdate etcd3" audit-id:b3f010a7-f4ec-4f7a-80a6-117fc9f98ab4,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 548ms (17:11:44.457)
	Trace[1218171398]:  ---"Txn call completed" 547ms (17:11:45.005)]
	Trace[1218171398]: [549.304477ms] [549.304477ms] END
	I0422 17:11:50.469644       1 controller.go:615] quota admission added evaluator for: namespaces
	I0422 17:11:50.891879       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.212.241"}
	I0422 17:11:50.935927       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.145.153"}
	E0422 17:11:55.254064       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8441->192.168.39.1:60286: use of closed network connection
	E0422 17:11:56.146145       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8441->192.168.39.1:60306: use of closed network connection
	E0422 17:11:58.062610       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8441->192.168.39.1:42304: use of closed network connection
	E0422 17:11:58.786732       1 conn.go:339] Error on socket receive: read tcp 192.168.39.154:8441->192.168.39.1:42328: use of closed network connection
	
	
	==> kube-controller-manager [d01ed3a7ec1af161d1cb042e9a400bc289f50c169113e9824453edca746bb853] <==
	I0422 17:10:25.879351       1 shared_informer.go:320] Caches are synced for attach detach
	I0422 17:10:25.882621       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0422 17:10:25.885988       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0422 17:10:25.888531       1 shared_informer.go:320] Caches are synced for HPA
	I0422 17:10:25.890825       1 shared_informer.go:320] Caches are synced for stateful set
	I0422 17:10:25.894187       1 shared_informer.go:320] Caches are synced for endpoint
	I0422 17:10:25.896305       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0422 17:10:25.898952       1 shared_informer.go:320] Caches are synced for persistent volume
	I0422 17:10:25.904729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.603482ms"
	I0422 17:10:25.905458       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.798µs"
	I0422 17:10:25.908156       1 shared_informer.go:320] Caches are synced for daemon sets
	I0422 17:10:25.911510       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0422 17:10:25.916960       1 shared_informer.go:320] Caches are synced for deployment
	I0422 17:10:25.917320       1 shared_informer.go:320] Caches are synced for PVC protection
	I0422 17:10:25.931534       1 shared_informer.go:320] Caches are synced for GC
	I0422 17:10:25.932627       1 shared_informer.go:320] Caches are synced for disruption
	I0422 17:10:25.938609       1 shared_informer.go:320] Caches are synced for taint
	I0422 17:10:25.938715       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0422 17:10:25.938782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-005894"
	I0422 17:10:25.938831       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0422 17:10:25.958021       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 17:10:25.996880       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 17:10:26.420310       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 17:10:26.431489       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 17:10:26.431535       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [f85e5f46a92a21564ef71f57b18679fcfcbc49026fedc10f67800de8f0654afb] <==
	I0422 17:11:30.187271       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="36.387767ms"
	I0422 17:11:30.276049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="88.73873ms"
	I0422 17:11:30.276288       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="138.541µs"
	I0422 17:11:48.396963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="21.559722ms"
	I0422 17:11:48.397058       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-64454c8b5c" duration="38.84µs"
	I0422 17:11:50.591808       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="45.974652ms"
	E0422 17:11:50.591901       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 17:11:50.609906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="17.94569ms"
	E0422 17:11:50.609976       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 17:11:50.618943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="46.838124ms"
	E0422 17:11:50.618992       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 17:11:50.623022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.019284ms"
	E0422 17:11:50.623073       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 17:11:50.625477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="6.410192ms"
	E0422 17:11:50.625537       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0422 17:11:50.710052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="84.469657ms"
	I0422 17:11:50.722140       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="99.042778ms"
	I0422 17:11:50.805795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="83.48336ms"
	I0422 17:11:50.826336       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="116.25042ms"
	I0422 17:11:50.863316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="57.359381ms"
	I0422 17:11:50.864376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="363.12µs"
	I0422 17:11:50.863992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="37.520481ms"
	I0422 17:11:50.864798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="37.927µs"
	I0422 17:11:56.299715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="21.613232ms"
	I0422 17:11:56.302425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="116.605µs"
	
	
	==> kube-proxy [1cfa4d60dfeeb66243d1b11a212385e35c2a225b08affab540f882045842d0a7] <==
	I0422 17:10:56.281033       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:10:56.355936       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0422 17:10:56.435394       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:10:56.435491       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:10:56.435521       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:10:56.438413       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:10:56.438798       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:10:56.438845       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:10:56.439916       1 config.go:192] "Starting service config controller"
	I0422 17:10:56.439983       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:10:56.440021       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:10:56.440037       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:10:56.440685       1 config.go:319] "Starting node config controller"
	I0422 17:10:56.442097       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:10:56.540127       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:10:56.540275       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:10:56.543290       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [802caa6d41407ccf92dd8eb13162b055d5a554f3a9547106b78f46dd0e1334f4] <==
	I0422 17:10:14.394959       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:10:14.414131       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.154"]
	I0422 17:10:14.493308       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:10:14.493402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:10:14.493433       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:10:14.498954       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:10:14.499285       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:10:14.499338       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:10:14.506479       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:10:14.506532       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:10:14.506554       1 config.go:192] "Starting service config controller"
	I0422 17:10:14.506558       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:10:14.506908       1 config.go:319] "Starting node config controller"
	I0422 17:10:14.506944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:10:14.606948       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:10:14.606948       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:10:14.607017       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [25200a6415faa4ac198c1859ec661ab6adf9f1808475fbcc67edfd9bd3591bb1] <==
	I0422 17:10:10.128634       1 serving.go:380] Generated self-signed cert in-memory
	W0422 17:10:12.764555       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 17:10:12.764639       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 17:10:12.764666       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 17:10:12.764689       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 17:10:12.832242       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 17:10:12.832282       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:10:12.835855       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 17:10:12.835990       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 17:10:12.836020       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 17:10:12.836036       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 17:10:12.936958       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 17:10:39.901114       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [47843a3d6896c387ecd545e594f6c3bd931940ee8cb798b9b2da6f1e1073d444] <==
	I0422 17:10:52.153351       1 serving.go:380] Generated self-signed cert in-memory
	W0422 17:10:54.475286       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 17:10:54.475392       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 17:10:54.475404       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 17:10:54.475411       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 17:10:54.532152       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 17:10:54.533437       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:10:54.538190       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 17:10:54.538492       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 17:10:54.538525       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 17:10:54.538548       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 17:10:54.638817       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 17:12:50 functional-005894 kubelet[4864]: E0422 17:12:50.363498    4864 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:12:50 functional-005894 kubelet[4864]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:12:50 functional-005894 kubelet[4864]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:12:50 functional-005894 kubelet[4864]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:12:50 functional-005894 kubelet[4864]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:13:50 functional-005894 kubelet[4864]: E0422 17:13:50.364008    4864 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:13:50 functional-005894 kubelet[4864]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:13:50 functional-005894 kubelet[4864]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:13:50 functional-005894 kubelet[4864]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:13:50 functional-005894 kubelet[4864]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:14:50 functional-005894 kubelet[4864]: E0422 17:14:50.367473    4864 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:14:50 functional-005894 kubelet[4864]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:14:50 functional-005894 kubelet[4864]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:14:50 functional-005894 kubelet[4864]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:14:50 functional-005894 kubelet[4864]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:15:50 functional-005894 kubelet[4864]: E0422 17:15:50.363732    4864 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:15:50 functional-005894 kubelet[4864]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:15:50 functional-005894 kubelet[4864]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:15:50 functional-005894 kubelet[4864]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:15:50 functional-005894 kubelet[4864]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:16:50 functional-005894 kubelet[4864]: E0422 17:16:50.365489    4864 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:16:50 functional-005894 kubelet[4864]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:16:50 functional-005894 kubelet[4864]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:16:50 functional-005894 kubelet[4864]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:16:50 functional-005894 kubelet[4864]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [d94c4182eef03f2e2973e62ce6a5cc93ba4746b987397bfa4b50059e2a5972fa] <==
	I0422 17:10:56.157324       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 17:10:56.180495       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 17:10:56.182115       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 17:11:13.590158       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 17:11:13.591335       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-005894_553349e2-30e4-4d8a-9ab7-0a287eeae281!
	I0422 17:11:13.602131       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62dca532-9cea-4e7a-b86e-528162a6975e", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-005894_553349e2-30e4-4d8a-9ab7-0a287eeae281 became leader
	I0422 17:11:13.692515       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-005894_553349e2-30e4-4d8a-9ab7-0a287eeae281!
	I0422 17:11:24.312333       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0422 17:11:24.313254       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"09369239-eb45-45ba-9f59-cb9ec68ecf72", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0422 17:11:24.312646       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    6b1db5d0-65d2-4895-a218-690ca45965b8 355 0 2024-04-22 17:09:49 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-04-22 17:09:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-09369239-eb45-45ba-9f59-cb9ec68ecf72 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  09369239-eb45-45ba-9f59-cb9ec68ecf72 677 0 2024-04-22 17:11:24 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-04-22 17:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-04-22 17:11:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0422 17:11:24.314898       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-09369239-eb45-45ba-9f59-cb9ec68ecf72" provisioned
	I0422 17:11:24.314948       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0422 17:11:24.314959       1 volume_store.go:212] Trying to save persistentvolume "pvc-09369239-eb45-45ba-9f59-cb9ec68ecf72"
	I0422 17:11:24.330795       1 volume_store.go:219] persistentvolume "pvc-09369239-eb45-45ba-9f59-cb9ec68ecf72" saved
	I0422 17:11:24.331058       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"09369239-eb45-45ba-9f59-cb9ec68ecf72", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-09369239-eb45-45ba-9f59-cb9ec68ecf72
	
	
	==> storage-provisioner [ea0c620dea2697227c3ec50e3231e741c2212a97fffe9ebab44a17df8be46e69] <==
	I0422 17:10:14.249427       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 17:10:14.272099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 17:10:14.272276       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 17:10:14.301997       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 17:10:14.302136       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-005894_ca9ce03b-9f5c-4ed4-94d2-4a7f058f115a!
	I0422 17:10:14.302188       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62dca532-9cea-4e7a-b86e-528162a6975e", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-005894_ca9ce03b-9f5c-4ed4-94d2-4a7f058f115a became leader
	I0422 17:10:14.402633       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-005894_ca9ce03b-9f5c-4ed4-94d2-4a7f058f115a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-005894 -n functional-005894
helpers_test.go:261: (dbg) Run:  kubectl --context functional-005894 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-779776cb65-jccbh
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-005894 describe pod busybox-mount kubernetes-dashboard-779776cb65-jccbh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-005894 describe pod busybox-mount kubernetes-dashboard-779776cb65-jccbh: exit status 1 (67.54207ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-005894/192.168.39.154
	Start Time:       Mon, 22 Apr 2024 17:11:35 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://bd453f5d7b98aaa7ce9c8ca3ab8f161b9343b6786ca7dc8ac9d2a0ef0e8c6b71
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 22 Apr 2024 17:11:50 +0000
	      Finished:     Mon, 22 Apr 2024 17:11:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5ddq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-x5ddq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m15s  default-scheduler  Successfully assigned default/busybox-mount to functional-005894
	  Normal  Pulling    5m14s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m1s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.055s (12.785s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m1s   kubelet            Created container mount-munger
	  Normal  Started    5m1s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-779776cb65-jccbh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-005894 describe pod busybox-mount kubernetes-dashboard-779776cb65-jccbh: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 node stop m02 -v=7 --alsologtostderr
E0422 17:21:24.123667   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:29.244075   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:39.485023   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:59.966095   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:22:40.926639   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.495208487s)

                                                
                                                
-- stdout --
	* Stopping node "ha-025067-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:21:23.596903   34791 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:21:23.597159   34791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:21:23.597170   34791 out.go:304] Setting ErrFile to fd 2...
	I0422 17:21:23.597176   34791 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:21:23.597356   34791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:21:23.597604   34791 mustload.go:65] Loading cluster: ha-025067
	I0422 17:21:23.598001   34791 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:21:23.598023   34791 stop.go:39] StopHost: ha-025067-m02
	I0422 17:21:23.598412   34791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:21:23.598463   34791 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:21:23.615310   34791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34041
	I0422 17:21:23.615755   34791 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:21:23.616318   34791 main.go:141] libmachine: Using API Version  1
	I0422 17:21:23.616335   34791 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:21:23.616698   34791 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:21:23.619208   34791 out.go:177] * Stopping node "ha-025067-m02"  ...
	I0422 17:21:23.620332   34791 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 17:21:23.620356   34791 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:21:23.620580   34791 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 17:21:23.620610   34791 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:21:23.623417   34791 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:21:23.623972   34791 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:21:23.624020   34791 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:21:23.624118   34791 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:21:23.624274   34791 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:21:23.624416   34791 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:21:23.624572   34791 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:21:23.712199   34791 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 17:21:23.766365   34791 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 17:21:23.822475   34791 main.go:141] libmachine: Stopping "ha-025067-m02"...
	I0422 17:21:23.822507   34791 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:21:23.824162   34791 main.go:141] libmachine: (ha-025067-m02) Calling .Stop
	I0422 17:21:23.828139   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 0/120
	I0422 17:21:24.829985   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 1/120
	I0422 17:21:25.831358   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 2/120
	I0422 17:21:26.833797   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 3/120
	I0422 17:21:27.835723   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 4/120
	I0422 17:21:28.837293   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 5/120
	I0422 17:21:29.838788   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 6/120
	I0422 17:21:30.840530   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 7/120
	I0422 17:21:31.842457   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 8/120
	I0422 17:21:32.843811   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 9/120
	I0422 17:21:33.846219   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 10/120
	I0422 17:21:34.848275   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 11/120
	I0422 17:21:35.849465   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 12/120
	I0422 17:21:36.851540   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 13/120
	I0422 17:21:37.853975   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 14/120
	I0422 17:21:38.855915   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 15/120
	I0422 17:21:39.858126   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 16/120
	I0422 17:21:40.859513   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 17/120
	I0422 17:21:41.861142   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 18/120
	I0422 17:21:42.862531   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 19/120
	I0422 17:21:43.864907   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 20/120
	I0422 17:21:44.866925   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 21/120
	I0422 17:21:45.868283   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 22/120
	I0422 17:21:46.869837   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 23/120
	I0422 17:21:47.871662   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 24/120
	I0422 17:21:48.873389   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 25/120
	I0422 17:21:49.875196   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 26/120
	I0422 17:21:50.876446   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 27/120
	I0422 17:21:51.877828   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 28/120
	I0422 17:21:52.879886   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 29/120
	I0422 17:21:53.882097   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 30/120
	I0422 17:21:54.883761   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 31/120
	I0422 17:21:55.885056   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 32/120
	I0422 17:21:56.886410   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 33/120
	I0422 17:21:57.887850   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 34/120
	I0422 17:21:58.889635   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 35/120
	I0422 17:21:59.891770   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 36/120
	I0422 17:22:00.893106   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 37/120
	I0422 17:22:01.894542   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 38/120
	I0422 17:22:02.895960   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 39/120
	I0422 17:22:03.898045   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 40/120
	I0422 17:22:04.899630   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 41/120
	I0422 17:22:05.901897   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 42/120
	I0422 17:22:06.903269   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 43/120
	I0422 17:22:07.905712   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 44/120
	I0422 17:22:08.907855   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 45/120
	I0422 17:22:09.909329   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 46/120
	I0422 17:22:10.910991   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 47/120
	I0422 17:22:11.913340   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 48/120
	I0422 17:22:12.914656   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 49/120
	I0422 17:22:13.916761   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 50/120
	I0422 17:22:14.918474   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 51/120
	I0422 17:22:15.919833   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 52/120
	I0422 17:22:16.921770   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 53/120
	I0422 17:22:17.923094   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 54/120
	I0422 17:22:18.925118   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 55/120
	I0422 17:22:19.926553   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 56/120
	I0422 17:22:20.927920   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 57/120
	I0422 17:22:21.929559   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 58/120
	I0422 17:22:22.931524   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 59/120
	I0422 17:22:23.933351   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 60/120
	I0422 17:22:24.934773   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 61/120
	I0422 17:22:25.936971   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 62/120
	I0422 17:22:26.938599   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 63/120
	I0422 17:22:27.940018   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 64/120
	I0422 17:22:28.942083   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 65/120
	I0422 17:22:29.943388   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 66/120
	I0422 17:22:30.945935   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 67/120
	I0422 17:22:31.947710   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 68/120
	I0422 17:22:32.949799   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 69/120
	I0422 17:22:33.951794   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 70/120
	I0422 17:22:34.953642   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 71/120
	I0422 17:22:35.955026   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 72/120
	I0422 17:22:36.956491   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 73/120
	I0422 17:22:37.958022   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 74/120
	I0422 17:22:38.959525   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 75/120
	I0422 17:22:39.961454   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 76/120
	I0422 17:22:40.963235   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 77/120
	I0422 17:22:41.964685   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 78/120
	I0422 17:22:42.966939   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 79/120
	I0422 17:22:43.968958   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 80/120
	I0422 17:22:44.971003   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 81/120
	I0422 17:22:45.972459   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 82/120
	I0422 17:22:46.973804   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 83/120
	I0422 17:22:47.975961   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 84/120
	I0422 17:22:48.977567   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 85/120
	I0422 17:22:49.979075   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 86/120
	I0422 17:22:50.980623   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 87/120
	I0422 17:22:51.981888   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 88/120
	I0422 17:22:52.983891   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 89/120
	I0422 17:22:53.985916   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 90/120
	I0422 17:22:54.987469   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 91/120
	I0422 17:22:55.989001   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 92/120
	I0422 17:22:56.990320   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 93/120
	I0422 17:22:57.991801   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 94/120
	I0422 17:22:58.993601   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 95/120
	I0422 17:22:59.994986   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 96/120
	I0422 17:23:00.997097   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 97/120
	I0422 17:23:01.998611   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 98/120
	I0422 17:23:02.999940   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 99/120
	I0422 17:23:04.002079   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 100/120
	I0422 17:23:05.003580   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 101/120
	I0422 17:23:06.005614   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 102/120
	I0422 17:23:07.007051   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 103/120
	I0422 17:23:08.008293   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 104/120
	I0422 17:23:09.009849   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 105/120
	I0422 17:23:10.011952   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 106/120
	I0422 17:23:11.013402   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 107/120
	I0422 17:23:12.014822   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 108/120
	I0422 17:23:13.016542   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 109/120
	I0422 17:23:14.018113   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 110/120
	I0422 17:23:15.019359   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 111/120
	I0422 17:23:16.021753   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 112/120
	I0422 17:23:17.023098   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 113/120
	I0422 17:23:18.024499   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 114/120
	I0422 17:23:19.026445   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 115/120
	I0422 17:23:20.027759   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 116/120
	I0422 17:23:21.029719   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 117/120
	I0422 17:23:22.031020   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 118/120
	I0422 17:23:23.032309   34791 main.go:141] libmachine: (ha-025067-m02) Waiting for machine to stop 119/120
	I0422 17:23:24.033090   34791 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 17:23:24.033217   34791 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-025067 node stop m02 -v=7 --alsologtostderr": exit status 30
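The hundred-plus "Waiting for machine to stop N/120" lines above come from a one-second polling loop that gives up after 120 attempts and then surfaces the "unable to stop vm, current state \"Running\"" error that makes the node stop exit with status 30. The Go sketch below only illustrates that retry pattern, assuming a hypothetical machine type and states; it is not minikube's actual libmachine code.

package main

import (
	"fmt"
	"log"
	"time"
)

// vmState is a hypothetical stand-in for the state string a driver reports.
type vmState string

const (
	stateRunning vmState = "Running"
	stateStopped vmState = "Stopped"
)

// machine mimics only the small part of a driver needed for this sketch.
type machine struct {
	name  string
	state vmState
}

// Stop asks the guest to shut down; the request may be ignored, which is
// why the caller still has to poll the reported state afterwards.
func (m *machine) Stop() error { return nil }

func (m *machine) State() vmState { return m.state }

// waitForStop polls once per second and fails after the given number of
// attempts, mirroring the 20/120 ... 119/120 progression in the log above.
func waitForStop(m *machine, attempts int) error {
	for i := 0; i < attempts; i++ {
		if m.State() == stateStopped {
			return nil
		}
		log.Printf("(%s) Waiting for machine to stop %d/%d", m.name, i, attempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", m.State())
}

func main() {
	m := &machine{name: "ha-025067-m02", state: stateRunning}
	if err := m.Stop(); err != nil {
		log.Fatal(err)
	}
	if err := waitForStop(m, 120); err != nil {
		// Corresponds to the "X Failed to stop node m02: Temporary Error" path above.
		fmt.Printf("X Failed to stop node m02: Temporary Error: stop: %v\n", err)
	}
}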
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (19.185565752s)

                                                
                                                
-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:23:24.091164   35244 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:23:24.091439   35244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:23:24.091450   35244 out.go:304] Setting ErrFile to fd 2...
	I0422 17:23:24.091456   35244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:23:24.091670   35244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:23:24.091877   35244 out.go:298] Setting JSON to false
	I0422 17:23:24.091905   35244 mustload.go:65] Loading cluster: ha-025067
	I0422 17:23:24.091969   35244 notify.go:220] Checking for updates...
	I0422 17:23:24.092306   35244 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:23:24.092325   35244 status.go:255] checking status of ha-025067 ...
	I0422 17:23:24.092811   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:24.092877   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:24.116010   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38373
	I0422 17:23:24.116529   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:24.117061   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:24.117090   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:24.117408   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:24.117591   35244 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:23:24.119399   35244 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:23:24.119416   35244 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:23:24.119770   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:24.119814   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:24.134564   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0422 17:23:24.135080   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:24.135591   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:24.135616   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:24.135963   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:24.136168   35244 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:23:24.139293   35244 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:24.139756   35244 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:23:24.139790   35244 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:24.139964   35244 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:23:24.140287   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:24.140334   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:24.154755   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45113
	I0422 17:23:24.155175   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:24.155633   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:24.155667   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:24.156021   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:24.156253   35244 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:23:24.156474   35244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:24.156505   35244 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:23:24.159471   35244 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:24.159982   35244 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:23:24.160007   35244 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:24.160149   35244 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:23:24.160340   35244 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:23:24.160489   35244 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:23:24.160633   35244 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:23:24.249504   35244 ssh_runner.go:195] Run: systemctl --version
	I0422 17:23:24.257579   35244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:24.280613   35244 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:23:24.280645   35244 api_server.go:166] Checking apiserver status ...
	I0422 17:23:24.280680   35244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:23:24.300872   35244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:23:24.313178   35244 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:23:24.313236   35244 ssh_runner.go:195] Run: ls
	I0422 17:23:24.319490   35244 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:23:24.323717   35244 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:23:24.323746   35244 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:23:24.323759   35244 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:23:24.323782   35244 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:23:24.324072   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:24.324105   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:24.338991   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0422 17:23:24.339491   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:24.339877   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:24.339894   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:24.340130   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:24.340281   35244 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:23:24.342077   35244 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:23:24.342096   35244 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:23:24.342378   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:24.342412   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:24.356631   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33525
	I0422 17:23:24.357048   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:24.357481   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:24.357511   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:24.357764   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:24.357915   35244 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:23:24.360617   35244 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:24.361082   35244 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:23:24.361098   35244 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:24.361285   35244 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:23:24.361598   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:24.361643   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:24.376341   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0422 17:23:24.376726   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:24.377171   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:24.377195   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:24.377582   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:24.377787   35244 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:23:24.377992   35244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:24.378012   35244 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:23:24.380537   35244 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:24.380956   35244 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:23:24.380983   35244 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:24.381079   35244 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:23:24.381267   35244 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:23:24.381410   35244 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:23:24.381545   35244 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	W0422 17:23:42.843360   35244 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:23:42.843465   35244 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0422 17:23:42.843528   35244 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:23:42.843556   35244 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:23:42.843581   35244 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:23:42.843595   35244 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:23:42.844007   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:42.844069   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:42.858683   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34707
	I0422 17:23:42.859201   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:42.859772   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:42.859794   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:42.860108   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:42.860314   35244 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:23:42.861987   35244 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:23:42.862008   35244 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:23:42.862795   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:42.862861   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:42.877428   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I0422 17:23:42.877916   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:42.878364   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:42.878389   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:42.878679   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:42.878892   35244 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:23:42.881815   35244 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:42.882360   35244 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:23:42.882389   35244 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:42.882567   35244 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:23:42.882868   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:42.882903   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:42.897201   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0422 17:23:42.897658   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:42.898171   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:42.898198   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:42.898541   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:42.898728   35244 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:23:42.898932   35244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:42.898952   35244 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:23:42.901494   35244 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:42.901882   35244 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:23:42.901910   35244 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:42.902027   35244 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:23:42.902226   35244 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:23:42.902376   35244 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:23:42.902538   35244 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:23:42.985452   35244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:43.004776   35244 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:23:43.004805   35244 api_server.go:166] Checking apiserver status ...
	I0422 17:23:43.004836   35244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:23:43.020320   35244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:23:43.039307   35244 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:23:43.039387   35244 ssh_runner.go:195] Run: ls
	I0422 17:23:43.044594   35244 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:23:43.051045   35244 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:23:43.051068   35244 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:23:43.051076   35244 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:23:43.051095   35244 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:23:43.051431   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:43.051469   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:43.066111   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39057
	I0422 17:23:43.066479   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:43.066979   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:43.067011   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:43.067333   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:43.067539   35244 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:23:43.069082   35244 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:23:43.069100   35244 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:23:43.069363   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:43.069415   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:43.085807   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38939
	I0422 17:23:43.086192   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:43.086643   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:43.086665   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:43.086969   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:43.087179   35244 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:23:43.089949   35244 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:43.090389   35244 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:23:43.090420   35244 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:43.090561   35244 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:23:43.090843   35244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:43.090886   35244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:43.105489   35244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33087
	I0422 17:23:43.105847   35244 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:43.106306   35244 main.go:141] libmachine: Using API Version  1
	I0422 17:23:43.106325   35244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:43.106606   35244 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:43.106781   35244 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:23:43.106942   35244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:43.106971   35244 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:23:43.109588   35244 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:43.109929   35244 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:23:43.109958   35244 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:43.110073   35244 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:23:43.110246   35244 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:23:43.110381   35244 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:23:43.110499   35244 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:23:43.197281   35244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:43.218368   35244 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr" : exit status 3
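In the status output above, ha-025067-m02 is reported as host "Error" and kubelet "Nonexistent" once the SSH dial fails with "no route to host", while the remaining nodes are still probed normally. A minimal sketch of that per-node classification is shown below, assuming a plain TCP reachability check on port 22; the real status path runs commands over an SSH session, and the nodeStatus type and address list here are hypothetical.

package main

import (
	"fmt"
	"net"
	"time"
)

// nodeStatus is a hypothetical, trimmed-down version of the per-node
// status fields shown in the stdout block above.
type nodeStatus struct {
	Name    string
	Host    string
	Kubelet string
}

// probe classifies a node purely by TCP reachability of its SSH port;
// the real check additionally runs systemctl and apiserver health probes.
func probe(name, addr string) nodeStatus {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// Unreachable (e.g. "connect: no route to host"): report the host
		// as Error and its services as Nonexistent, as in the log above.
		return nodeStatus{Name: name, Host: "Error", Kubelet: "Nonexistent"}
	}
	conn.Close()
	return nodeStatus{Name: name, Host: "Running", Kubelet: "Running"}
}

func main() {
	nodes := []struct{ name, addr string }{
		{"ha-025067", "192.168.39.22:22"},
		{"ha-025067-m02", "192.168.39.56:22"}, // unreachable after the failed stop
		{"ha-025067-m03", "192.168.39.220:22"},
	}
	for _, n := range nodes {
		s := probe(n.name, n.addr)
		fmt.Printf("%s\n\thost: %s\n\tkubelet: %s\n", s.Name, s.Host, s.Kubelet)
	}
}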
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-025067 -n ha-025067
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-025067 logs -n 25: (1.467941102s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067:/home/docker/cp-test_ha-025067-m03_ha-025067.txt                      |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067 sudo cat                                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067.txt                                |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m02:/home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m04 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp testdata/cp-test.txt                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067:/home/docker/cp-test_ha-025067-m04_ha-025067.txt                      |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067 sudo cat                                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067.txt                                |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m02:/home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03:/home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m03 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-025067 node stop m02 -v=7                                                    | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 17:16:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 17:16:52.541957   30338 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:16:52.542113   30338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:16:52.542124   30338 out.go:304] Setting ErrFile to fd 2...
	I0422 17:16:52.542131   30338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:16:52.542370   30338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:16:52.542997   30338 out.go:298] Setting JSON to false
	I0422 17:16:52.543963   30338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3558,"bootTime":1713802655,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:16:52.544023   30338 start.go:139] virtualization: kvm guest
	I0422 17:16:52.546239   30338 out.go:177] * [ha-025067] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:16:52.547926   30338 notify.go:220] Checking for updates...
	I0422 17:16:52.549163   30338 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:16:52.550487   30338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:16:52.551790   30338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:16:52.552990   30338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:16:52.554110   30338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:16:52.555258   30338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:16:52.556755   30338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:16:52.591545   30338 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 17:16:52.592934   30338 start.go:297] selected driver: kvm2
	I0422 17:16:52.592952   30338 start.go:901] validating driver "kvm2" against <nil>
	I0422 17:16:52.592970   30338 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:16:52.593731   30338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:16:52.593822   30338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 17:16:52.608623   30338 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 17:16:52.608678   30338 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 17:16:52.608883   30338 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:16:52.608934   30338 cni.go:84] Creating CNI manager for ""
	I0422 17:16:52.608946   30338 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0422 17:16:52.608953   30338 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0422 17:16:52.609003   30338 start.go:340] cluster config:
	{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:16:52.609091   30338 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:16:52.611390   30338 out.go:177] * Starting "ha-025067" primary control-plane node in "ha-025067" cluster
	I0422 17:16:52.612836   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:16:52.612868   30338 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 17:16:52.612876   30338 cache.go:56] Caching tarball of preloaded images
	I0422 17:16:52.612948   30338 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:16:52.612959   30338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:16:52.613259   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:16:52.613279   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json: {Name:mkfe9ab9288b859a19abb2db630c3d4dba4d6aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:16:52.613408   30338 start.go:360] acquireMachinesLock for ha-025067: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:16:52.613435   30338 start.go:364] duration metric: took 15.159µs to acquireMachinesLock for "ha-025067"
	I0422 17:16:52.613450   30338 start.go:93] Provisioning new machine with config: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:16:52.613508   30338 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 17:16:52.616169   30338 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 17:16:52.616330   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:16:52.616365   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:16:52.630862   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0422 17:16:52.631320   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:16:52.631823   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:16:52.631846   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:16:52.632177   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:16:52.632356   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:16:52.632507   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:16:52.632625   30338 start.go:159] libmachine.API.Create for "ha-025067" (driver="kvm2")
	I0422 17:16:52.632657   30338 client.go:168] LocalClient.Create starting
	I0422 17:16:52.632693   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 17:16:52.632726   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:16:52.632744   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:16:52.632797   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 17:16:52.632815   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:16:52.632829   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:16:52.632844   30338 main.go:141] libmachine: Running pre-create checks...
	I0422 17:16:52.632856   30338 main.go:141] libmachine: (ha-025067) Calling .PreCreateCheck
	I0422 17:16:52.633193   30338 main.go:141] libmachine: (ha-025067) Calling .GetConfigRaw
	I0422 17:16:52.633530   30338 main.go:141] libmachine: Creating machine...
	I0422 17:16:52.633544   30338 main.go:141] libmachine: (ha-025067) Calling .Create
	I0422 17:16:52.633656   30338 main.go:141] libmachine: (ha-025067) Creating KVM machine...
	I0422 17:16:52.634912   30338 main.go:141] libmachine: (ha-025067) DBG | found existing default KVM network
	I0422 17:16:52.635784   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:52.635601   30361 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0422 17:16:52.635809   30338 main.go:141] libmachine: (ha-025067) DBG | created network xml: 
	I0422 17:16:52.635822   30338 main.go:141] libmachine: (ha-025067) DBG | <network>
	I0422 17:16:52.635834   30338 main.go:141] libmachine: (ha-025067) DBG |   <name>mk-ha-025067</name>
	I0422 17:16:52.635843   30338 main.go:141] libmachine: (ha-025067) DBG |   <dns enable='no'/>
	I0422 17:16:52.635861   30338 main.go:141] libmachine: (ha-025067) DBG |   
	I0422 17:16:52.635875   30338 main.go:141] libmachine: (ha-025067) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 17:16:52.635892   30338 main.go:141] libmachine: (ha-025067) DBG |     <dhcp>
	I0422 17:16:52.635921   30338 main.go:141] libmachine: (ha-025067) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 17:16:52.635943   30338 main.go:141] libmachine: (ha-025067) DBG |     </dhcp>
	I0422 17:16:52.635958   30338 main.go:141] libmachine: (ha-025067) DBG |   </ip>
	I0422 17:16:52.635968   30338 main.go:141] libmachine: (ha-025067) DBG |   
	I0422 17:16:52.635977   30338 main.go:141] libmachine: (ha-025067) DBG | </network>
	I0422 17:16:52.635985   30338 main.go:141] libmachine: (ha-025067) DBG | 
	I0422 17:16:52.641459   30338 main.go:141] libmachine: (ha-025067) DBG | trying to create private KVM network mk-ha-025067 192.168.39.0/24...
	I0422 17:16:52.705304   30338 main.go:141] libmachine: (ha-025067) DBG | private KVM network mk-ha-025067 192.168.39.0/24 created
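The lines above show the kvm2 driver defining and starting the dedicated libvirt network mk-ha-025067 (192.168.39.0/24) from the XML it printed. A rough, hypothetical sketch of the same step using the virsh CLI (not the driver's actual libvirt bindings) might look like this, assuming virsh is installed and qemu:///system is reachable:

    // netdefine.go: hypothetical sketch of creating a private libvirt network
    // like mk-ha-025067 above, by shelling out to virsh instead of using the
    // kvm2 driver's libvirt bindings.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    const networkXML = `<network>
      <name>mk-ha-025067</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
    	// Write the network definition to a temp file that virsh can read.
    	f, err := os.CreateTemp("", "mk-net-*.xml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer os.Remove(f.Name())
    	if _, err := f.WriteString(networkXML); err != nil {
    		log.Fatal(err)
    	}
    	f.Close()

    	// Define the network, then start it and mark it autostarted.
    	for _, args := range [][]string{
    		{"net-define", f.Name()},
    		{"net-start", "mk-ha-025067"},
    		{"net-autostart", "mk-ha-025067"},
    	} {
    		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
    		out, err := cmd.CombinedOutput()
    		if err != nil {
    			log.Fatalf("virsh %v: %v\n%s", args, err, out)
    		}
    		fmt.Printf("virsh %v: %s", args, out)
    	}
    }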
	I0422 17:16:52.705336   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:52.705269   30361 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:16:52.705349   30338 main.go:141] libmachine: (ha-025067) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067 ...
	I0422 17:16:52.705365   30338 main.go:141] libmachine: (ha-025067) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 17:16:52.705520   30338 main.go:141] libmachine: (ha-025067) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 17:16:52.932516   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:52.932370   30361 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa...
	I0422 17:16:53.020479   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:53.020310   30361 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/ha-025067.rawdisk...
	I0422 17:16:53.020535   30338 main.go:141] libmachine: (ha-025067) DBG | Writing magic tar header
	I0422 17:16:53.020550   30338 main.go:141] libmachine: (ha-025067) DBG | Writing SSH key tar header
	I0422 17:16:53.020563   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:53.020480   30361 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067 ...
	I0422 17:16:53.020642   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067
	I0422 17:16:53.020680   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067 (perms=drwx------)
	I0422 17:16:53.020688   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 17:16:53.020695   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 17:16:53.020708   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 17:16:53.020722   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 17:16:53.020738   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 17:16:53.020748   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:16:53.020757   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 17:16:53.020771   30338 main.go:141] libmachine: (ha-025067) Creating domain...
	I0422 17:16:53.020780   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 17:16:53.020788   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 17:16:53.020800   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins
	I0422 17:16:53.020829   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home
	I0422 17:16:53.020852   30338 main.go:141] libmachine: (ha-025067) DBG | Skipping /home - not owner
	I0422 17:16:53.021802   30338 main.go:141] libmachine: (ha-025067) define libvirt domain using xml: 
	I0422 17:16:53.021819   30338 main.go:141] libmachine: (ha-025067) <domain type='kvm'>
	I0422 17:16:53.021828   30338 main.go:141] libmachine: (ha-025067)   <name>ha-025067</name>
	I0422 17:16:53.021835   30338 main.go:141] libmachine: (ha-025067)   <memory unit='MiB'>2200</memory>
	I0422 17:16:53.021843   30338 main.go:141] libmachine: (ha-025067)   <vcpu>2</vcpu>
	I0422 17:16:53.021850   30338 main.go:141] libmachine: (ha-025067)   <features>
	I0422 17:16:53.021859   30338 main.go:141] libmachine: (ha-025067)     <acpi/>
	I0422 17:16:53.021867   30338 main.go:141] libmachine: (ha-025067)     <apic/>
	I0422 17:16:53.021880   30338 main.go:141] libmachine: (ha-025067)     <pae/>
	I0422 17:16:53.021916   30338 main.go:141] libmachine: (ha-025067)     
	I0422 17:16:53.021924   30338 main.go:141] libmachine: (ha-025067)   </features>
	I0422 17:16:53.021936   30338 main.go:141] libmachine: (ha-025067)   <cpu mode='host-passthrough'>
	I0422 17:16:53.021954   30338 main.go:141] libmachine: (ha-025067)   
	I0422 17:16:53.021962   30338 main.go:141] libmachine: (ha-025067)   </cpu>
	I0422 17:16:53.021967   30338 main.go:141] libmachine: (ha-025067)   <os>
	I0422 17:16:53.021975   30338 main.go:141] libmachine: (ha-025067)     <type>hvm</type>
	I0422 17:16:53.022007   30338 main.go:141] libmachine: (ha-025067)     <boot dev='cdrom'/>
	I0422 17:16:53.022034   30338 main.go:141] libmachine: (ha-025067)     <boot dev='hd'/>
	I0422 17:16:53.022046   30338 main.go:141] libmachine: (ha-025067)     <bootmenu enable='no'/>
	I0422 17:16:53.022056   30338 main.go:141] libmachine: (ha-025067)   </os>
	I0422 17:16:53.022063   30338 main.go:141] libmachine: (ha-025067)   <devices>
	I0422 17:16:53.022074   30338 main.go:141] libmachine: (ha-025067)     <disk type='file' device='cdrom'>
	I0422 17:16:53.022090   30338 main.go:141] libmachine: (ha-025067)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/boot2docker.iso'/>
	I0422 17:16:53.022101   30338 main.go:141] libmachine: (ha-025067)       <target dev='hdc' bus='scsi'/>
	I0422 17:16:53.022110   30338 main.go:141] libmachine: (ha-025067)       <readonly/>
	I0422 17:16:53.022120   30338 main.go:141] libmachine: (ha-025067)     </disk>
	I0422 17:16:53.022129   30338 main.go:141] libmachine: (ha-025067)     <disk type='file' device='disk'>
	I0422 17:16:53.022135   30338 main.go:141] libmachine: (ha-025067)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 17:16:53.022146   30338 main.go:141] libmachine: (ha-025067)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/ha-025067.rawdisk'/>
	I0422 17:16:53.022152   30338 main.go:141] libmachine: (ha-025067)       <target dev='hda' bus='virtio'/>
	I0422 17:16:53.022157   30338 main.go:141] libmachine: (ha-025067)     </disk>
	I0422 17:16:53.022164   30338 main.go:141] libmachine: (ha-025067)     <interface type='network'>
	I0422 17:16:53.022172   30338 main.go:141] libmachine: (ha-025067)       <source network='mk-ha-025067'/>
	I0422 17:16:53.022182   30338 main.go:141] libmachine: (ha-025067)       <model type='virtio'/>
	I0422 17:16:53.022189   30338 main.go:141] libmachine: (ha-025067)     </interface>
	I0422 17:16:53.022205   30338 main.go:141] libmachine: (ha-025067)     <interface type='network'>
	I0422 17:16:53.022232   30338 main.go:141] libmachine: (ha-025067)       <source network='default'/>
	I0422 17:16:53.022259   30338 main.go:141] libmachine: (ha-025067)       <model type='virtio'/>
	I0422 17:16:53.022286   30338 main.go:141] libmachine: (ha-025067)     </interface>
	I0422 17:16:53.022302   30338 main.go:141] libmachine: (ha-025067)     <serial type='pty'>
	I0422 17:16:53.022316   30338 main.go:141] libmachine: (ha-025067)       <target port='0'/>
	I0422 17:16:53.022327   30338 main.go:141] libmachine: (ha-025067)     </serial>
	I0422 17:16:53.022338   30338 main.go:141] libmachine: (ha-025067)     <console type='pty'>
	I0422 17:16:53.022355   30338 main.go:141] libmachine: (ha-025067)       <target type='serial' port='0'/>
	I0422 17:16:53.022373   30338 main.go:141] libmachine: (ha-025067)     </console>
	I0422 17:16:53.022384   30338 main.go:141] libmachine: (ha-025067)     <rng model='virtio'>
	I0422 17:16:53.022394   30338 main.go:141] libmachine: (ha-025067)       <backend model='random'>/dev/random</backend>
	I0422 17:16:53.022404   30338 main.go:141] libmachine: (ha-025067)     </rng>
	I0422 17:16:53.022412   30338 main.go:141] libmachine: (ha-025067)     
	I0422 17:16:53.022426   30338 main.go:141] libmachine: (ha-025067)     
	I0422 17:16:53.022439   30338 main.go:141] libmachine: (ha-025067)   </devices>
	I0422 17:16:53.022449   30338 main.go:141] libmachine: (ha-025067) </domain>
	I0422 17:16:53.022460   30338 main.go:141] libmachine: (ha-025067) 
	I0422 17:16:53.026948   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:a5:56:10 in network default
	I0422 17:16:53.027518   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:53.027556   30338 main.go:141] libmachine: (ha-025067) Ensuring networks are active...
	I0422 17:16:53.028145   30338 main.go:141] libmachine: (ha-025067) Ensuring network default is active
	I0422 17:16:53.028505   30338 main.go:141] libmachine: (ha-025067) Ensuring network mk-ha-025067 is active
	I0422 17:16:53.028967   30338 main.go:141] libmachine: (ha-025067) Getting domain xml...
	I0422 17:16:53.029600   30338 main.go:141] libmachine: (ha-025067) Creating domain...
	I0422 17:16:54.194304   30338 main.go:141] libmachine: (ha-025067) Waiting to get IP...
	I0422 17:16:54.195315   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:54.195793   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:54.195821   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:54.195765   30361 retry.go:31] will retry after 207.971302ms: waiting for machine to come up
	I0422 17:16:54.405368   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:54.405849   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:54.405881   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:54.405803   30361 retry.go:31] will retry after 339.912064ms: waiting for machine to come up
	I0422 17:16:54.747484   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:54.747869   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:54.747901   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:54.747825   30361 retry.go:31] will retry after 306.603999ms: waiting for machine to come up
	I0422 17:16:55.056260   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:55.056704   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:55.056735   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:55.056654   30361 retry.go:31] will retry after 408.670158ms: waiting for machine to come up
	I0422 17:16:55.467196   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:55.467604   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:55.467629   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:55.467564   30361 retry.go:31] will retry after 638.292083ms: waiting for machine to come up
	I0422 17:16:56.107331   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:56.107755   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:56.107794   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:56.107719   30361 retry.go:31] will retry after 790.345835ms: waiting for machine to come up
	I0422 17:16:56.899646   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:56.900019   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:56.900054   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:56.899992   30361 retry.go:31] will retry after 896.720809ms: waiting for machine to come up
	I0422 17:16:57.798561   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:57.798968   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:57.799012   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:57.798920   30361 retry.go:31] will retry after 1.465416505s: waiting for machine to come up
	I0422 17:16:59.266468   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:59.266813   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:59.266866   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:59.266784   30361 retry.go:31] will retry after 1.392901232s: waiting for machine to come up
	I0422 17:17:00.661353   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:00.661718   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:00.661741   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:00.661663   30361 retry.go:31] will retry after 2.128283213s: waiting for machine to come up
	I0422 17:17:02.791467   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:02.791788   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:02.791814   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:02.791745   30361 retry.go:31] will retry after 1.856350174s: waiting for machine to come up
	I0422 17:17:04.649259   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:04.649742   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:04.649782   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:04.649687   30361 retry.go:31] will retry after 2.216077949s: waiting for machine to come up
	I0422 17:17:06.869019   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:06.869529   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:06.869553   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:06.869465   30361 retry.go:31] will retry after 3.742529286s: waiting for machine to come up
	I0422 17:17:10.615809   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:10.616365   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:10.616394   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:10.616315   30361 retry.go:31] will retry after 4.954168816s: waiting for machine to come up
	I0422 17:17:15.574406   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.574869   30338 main.go:141] libmachine: (ha-025067) Found IP for machine: 192.168.39.22
	I0422 17:17:15.574892   30338 main.go:141] libmachine: (ha-025067) Reserving static IP address...
	I0422 17:17:15.574904   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has current primary IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.575257   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find host DHCP lease matching {name: "ha-025067", mac: "52:54:00:8b:2a:21", ip: "192.168.39.22"} in network mk-ha-025067
	I0422 17:17:15.646280   30338 main.go:141] libmachine: (ha-025067) DBG | Getting to WaitForSSH function...
	I0422 17:17:15.646318   30338 main.go:141] libmachine: (ha-025067) Reserved static IP address: 192.168.39.22
	I0422 17:17:15.646330   30338 main.go:141] libmachine: (ha-025067) Waiting for SSH to be available...
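The "will retry after ...ms: waiting for machine to come up" lines above are the driver polling for a DHCP lease with steadily growing, jittered delays until the domain's MAC address shows up with an IP. A minimal, generic sketch of that pattern (lookupIP is a hypothetical stand-in for the real lease lookup, not minikube code) could be:

    // waitip.go: sketch of the grow-and-retry polling seen above.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupIP would query the libvirt network for a lease matching the
    // domain's MAC address; here it fails a few times to exercise the loop.
    func lookupIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoLease
    	}
    	return "192.168.39.22", nil
    }

    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for attempt := 0; time.Now().Before(deadline); attempt++ {
    		ip, err := lookupIP(attempt)
    		if err == nil {
    			return ip, nil
    		}
    		// Grow the delay and add a little jitter, roughly like the log above.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
    	ip, err := waitForIP(2 * time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("found IP:", ip)
    }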
	I0422 17:17:15.648969   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.649518   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:15.649544   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.649677   30338 main.go:141] libmachine: (ha-025067) DBG | Using SSH client type: external
	I0422 17:17:15.650157   30338 main.go:141] libmachine: (ha-025067) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa (-rw-------)
	I0422 17:17:15.650204   30338 main.go:141] libmachine: (ha-025067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 17:17:15.650230   30338 main.go:141] libmachine: (ha-025067) DBG | About to run SSH command:
	I0422 17:17:15.650245   30338 main.go:141] libmachine: (ha-025067) DBG | exit 0
	I0422 17:17:15.779021   30338 main.go:141] libmachine: (ha-025067) DBG | SSH cmd err, output: <nil>: 
	I0422 17:17:15.779271   30338 main.go:141] libmachine: (ha-025067) KVM machine creation complete!
	I0422 17:17:15.779583   30338 main.go:141] libmachine: (ha-025067) Calling .GetConfigRaw
	I0422 17:17:15.780108   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:15.780287   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:15.780417   30338 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 17:17:15.780429   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:15.781557   30338 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 17:17:15.781572   30338 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 17:17:15.781579   30338 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 17:17:15.781586   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:15.783633   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.784006   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:15.784032   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.784135   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:15.784322   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.784453   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.784555   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:15.784718   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:15.784950   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:15.784962   30338 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 17:17:15.894473   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
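The SSH probe above simply runs "exit 0" as the docker user with the generated id_rsa key and treats a successful exit as "SSH available". A small sketch of such a probe with golang.org/x/crypto/ssh, reusing the address, user and key path from the log (host-key checking is skipped here, mirroring the log's StrictHostKeyChecking=no, which is only acceptable for throwaway test VMs):

    // sshprobe.go: minimal sketch of the "exit 0" SSH liveness probe above.
    package main

    import (
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.22:22", cfg)
    	if err != nil {
    		log.Fatalf("ssh not ready: %v", err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer session.Close()

    	// The probe only cares that a trivial command succeeds.
    	if err := session.Run("exit 0"); err != nil {
    		log.Fatalf("probe failed: %v", err)
    	}
    	log.Println("SSH is available")
    }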
	I0422 17:17:15.894500   30338 main.go:141] libmachine: Detecting the provisioner...
	I0422 17:17:15.894511   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:15.897294   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.897667   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:15.897692   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.897818   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:15.898015   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.898147   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.898290   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:15.898464   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:15.898654   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:15.898674   30338 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 17:17:16.008050   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 17:17:16.008142   30338 main.go:141] libmachine: found compatible host: buildroot
	I0422 17:17:16.008157   30338 main.go:141] libmachine: Provisioning with buildroot...
	I0422 17:17:16.008170   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:17:16.008415   30338 buildroot.go:166] provisioning hostname "ha-025067"
	I0422 17:17:16.008437   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:17:16.008633   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.011139   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.011500   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.011520   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.011691   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.011859   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.012004   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.012132   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.012300   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.012496   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.012509   30338 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067 && echo "ha-025067" | sudo tee /etc/hostname
	I0422 17:17:16.134495   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067
	
	I0422 17:17:16.134535   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.137003   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.137308   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.137334   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.137493   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.137714   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.137909   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.138028   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.138205   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.138354   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.138369   30338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:17:16.256974   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:17:16.257003   30338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:17:16.257071   30338 buildroot.go:174] setting up certificates
	I0422 17:17:16.257083   30338 provision.go:84] configureAuth start
	I0422 17:17:16.257097   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:17:16.257367   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:16.259679   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.260084   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.260120   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.260243   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.262444   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.262813   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.262835   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.262965   30338 provision.go:143] copyHostCerts
	I0422 17:17:16.263004   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:17:16.263106   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:17:16.263167   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:17:16.263251   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:17:16.263340   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:17:16.263359   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:17:16.263366   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:17:16.263391   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:17:16.263441   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:17:16.263459   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:17:16.263466   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:17:16.263491   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:17:16.263579   30338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067 san=[127.0.0.1 192.168.39.22 ha-025067 localhost minikube]
	I0422 17:17:16.351025   30338 provision.go:177] copyRemoteCerts
	I0422 17:17:16.351085   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:17:16.351106   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.353536   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.353827   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.353862   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.354018   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.354199   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.354331   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.354470   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:16.442349   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:17:16.442413   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:17:16.467844   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:17:16.467923   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0422 17:17:16.493373   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:17:16.493431   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 17:17:16.518912   30338 provision.go:87] duration metric: took 261.814442ms to configureAuth
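configureAuth above copies the host CA material and generates a server certificate whose SANs are 127.0.0.1, 192.168.39.22, ha-025067, localhost and minikube, then pushes the PEM files to /etc/docker on the guest. The following is only a self-contained sketch of the certificate step with crypto/x509; it creates a throwaway CA inline instead of loading the real ca.pem/ca-key.pem, so file names and key sizes are illustrative assumptions:

    // servercert.go: rough sketch of "generating server cert ... san=[...]".
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (the real flow loads ca.pem / ca-key.pem instead).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		BasicConstraintsValid: true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Server certificate with the SANs reported in the log.
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-025067"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.22")},
    		DNSNames:     []string{"ha-025067", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pemCert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	pemKey := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	if err := os.WriteFile("server.pem", pemCert, 0o644); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("server-key.pem", pemKey, 0o600); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("wrote server.pem and server-key.pem")
    }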
	I0422 17:17:16.518945   30338 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:17:16.519215   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:17:16.519352   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.522066   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.522405   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.522432   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.522596   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.522786   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.522973   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.523098   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.523251   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.523438   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.523469   30338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:17:16.797209   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:17:16.797236   30338 main.go:141] libmachine: Checking connection to Docker...
	I0422 17:17:16.797244   30338 main.go:141] libmachine: (ha-025067) Calling .GetURL
	I0422 17:17:16.798626   30338 main.go:141] libmachine: (ha-025067) DBG | Using libvirt version 6000000
	I0422 17:17:16.801200   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.801514   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.801546   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.801723   30338 main.go:141] libmachine: Docker is up and running!
	I0422 17:17:16.801735   30338 main.go:141] libmachine: Reticulating splines...
	I0422 17:17:16.801741   30338 client.go:171] duration metric: took 24.169074993s to LocalClient.Create
	I0422 17:17:16.801764   30338 start.go:167] duration metric: took 24.169140026s to libmachine.API.Create "ha-025067"
	I0422 17:17:16.801772   30338 start.go:293] postStartSetup for "ha-025067" (driver="kvm2")
	I0422 17:17:16.801785   30338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:17:16.801799   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:16.802012   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:17:16.802030   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.804046   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.804307   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.804334   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.804441   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.804627   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.804757   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.804888   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:16.890248   30338 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:17:16.894951   30338 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:17:16.894976   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:17:16.895050   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:17:16.895264   30338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:17:16.895285   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:17:16.895403   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:17:16.905715   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:17:16.931370   30338 start.go:296] duration metric: took 129.583412ms for postStartSetup
	I0422 17:17:16.931429   30338 main.go:141] libmachine: (ha-025067) Calling .GetConfigRaw
	I0422 17:17:16.931987   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:16.934618   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.934947   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.934983   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.935214   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:17:16.935403   30338 start.go:128] duration metric: took 24.321886362s to createHost
	I0422 17:17:16.935427   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.937763   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.938043   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.938070   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.938181   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.938369   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.938536   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.938712   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.938866   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.939028   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.939042   30338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:17:17.048077   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806237.020189542
	
	I0422 17:17:17.048098   30338 fix.go:216] guest clock: 1713806237.020189542
	I0422 17:17:17.048105   30338 fix.go:229] Guest: 2024-04-22 17:17:17.020189542 +0000 UTC Remote: 2024-04-22 17:17:16.93541497 +0000 UTC m=+24.442682821 (delta=84.774572ms)
	I0422 17:17:17.048135   30338 fix.go:200] guest clock delta is within tolerance: 84.774572ms
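The guest-clock check above asks the VM for "date +%s.%N" (the format verbs appear mangled as %!s(MISSING).%!N(MISSING) in the logged command), parses the reply and accepts the machine when the delta against the host clock stays within a tolerance. A small sketch of that comparison, with runRemote as a hypothetical stand-in for the SSH runner and the value from the log hard-coded:

    // clockdelta.go: sketch of the guest-clock tolerance check above.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // runRemote stands in for executing a command on the guest over SSH and
    // returning its stdout; here it just echoes the value seen in the log.
    func runRemote(cmd string) (string, error) {
    	return "1713806237.020189542", nil
    }

    func guestClockDelta(tolerance time.Duration) (time.Duration, bool, error) {
    	out, err := runRemote("date +%s.%N")
    	if err != nil {
    		return 0, false, err
    	}
    	// Assumes the full 9-digit nanosecond field produced by %N.
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, false, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
    	}
    	guest := time.Unix(sec, nsec)
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance, nil
    }

    func main() {
    	delta, ok, err := guestClockDelta(2 * time.Second)
    	if err != nil {
    		fmt.Println("clock check failed:", err)
    		return
    	}
    	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
    }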
	I0422 17:17:17.048145   30338 start.go:83] releasing machines lock for "ha-025067", held for 24.434699931s
	I0422 17:17:17.048165   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.048466   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:17.050647   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.051016   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:17.051044   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.051177   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.051722   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.051884   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.051970   30338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:17:17.052008   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:17.052079   30338 ssh_runner.go:195] Run: cat /version.json
	I0422 17:17:17.052105   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:17.054561   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.054870   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:17.054894   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.054916   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.055071   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:17.055251   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:17.055338   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:17.055372   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.055429   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:17.055523   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:17.055586   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:17.055657   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:17.055807   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:17.055945   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:17.172058   30338 ssh_runner.go:195] Run: systemctl --version
	I0422 17:17:17.178255   30338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:17:17.351734   30338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:17:17.357761   30338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:17:17.357826   30338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:17:17.374861   30338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 17:17:17.374884   30338 start.go:494] detecting cgroup driver to use...
	I0422 17:17:17.374944   30338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:17:17.391764   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:17:17.406363   30338 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:17:17.406446   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:17:17.420829   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:17:17.434640   30338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:17:17.560577   30338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:17:17.720575   30338 docker.go:233] disabling docker service ...
	I0422 17:17:17.720640   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:17:17.736345   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:17:17.749254   30338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:17:17.888250   30338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:17:17.998872   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:17:18.013867   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:17:18.033660   30338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:17:18.033729   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.045858   30338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:17:18.045930   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.057762   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.070225   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.082031   30338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:17:18.093847   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.105155   30338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.123633   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.135530   30338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:17:18.146350   30338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 17:17:18.146416   30338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 17:17:18.160607   30338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:17:18.171248   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:17:18.285103   30338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:17:18.427364   30338 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:17:18.427430   30338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:17:18.432215   30338 start.go:562] Will wait 60s for crictl version
	I0422 17:17:18.432261   30338 ssh_runner.go:195] Run: which crictl
	I0422 17:17:18.436087   30338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:17:18.474342   30338 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:17:18.474427   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:17:18.502366   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:17:18.533577   30338 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:17:18.535243   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:18.537807   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:18.538177   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:18.538206   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:18.538458   30338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:17:18.542748   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:17:18.556805   30338 kubeadm.go:877] updating cluster {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 17:17:18.556922   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:17:18.556963   30338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:17:18.590355   30338 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 17:17:18.590416   30338 ssh_runner.go:195] Run: which lz4
	I0422 17:17:18.594419   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0422 17:17:18.594510   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 17:17:18.598660   30338 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 17:17:18.598686   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 17:17:20.067394   30338 crio.go:462] duration metric: took 1.472907327s to copy over tarball
	I0422 17:17:20.067470   30338 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 17:17:22.341420   30338 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273918539s)
	I0422 17:17:22.341462   30338 crio.go:469] duration metric: took 2.274021881s to extract the tarball
	I0422 17:17:22.341473   30338 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 17:17:22.380285   30338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:17:22.430394   30338 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:17:22.430417   30338 cache_images.go:84] Images are preloaded, skipping loading
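The two crictl listings above bracket the preload step: the first finds no registry.k8s.io/kube-apiserver:v1.30.0 image, so the tarball is copied and extracted, and the second confirms the images are now present. A rough Go sketch of that check, assuming crictl is on PATH and the JSON field names shown in the struct (they are an assumption for illustration, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the part of `crictl images --output json` this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	const want = "registry.k8s.io/kube-apiserver:v1.30.0"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded images present, skipping tarball")
				return
			}
		}
	}
	fmt.Println("preload missing, extracting the preloaded-images tarball")
}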
	I0422 17:17:22.430423   30338 kubeadm.go:928] updating node { 192.168.39.22 8443 v1.30.0 crio true true} ...
	I0422 17:17:22.430517   30338 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:17:22.430577   30338 ssh_runner.go:195] Run: crio config
	I0422 17:17:22.482051   30338 cni.go:84] Creating CNI manager for ""
	I0422 17:17:22.482073   30338 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 17:17:22.482085   30338 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 17:17:22.482104   30338 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-025067 NodeName:ha-025067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 17:17:22.482226   30338 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-025067"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
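The block above is the multi-document kubeadm YAML that later gets written to /var/tmp/minikube/kubeadm.yaml.new. As an illustration of how such a document can be produced from per-node parameters, here is a small text/template sketch in Go; the template fragment and struct are illustrative only, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the values substituted into the fragment below.
type nodeParams struct {
	NodeName  string
	NodeIP    string
	CRISocket string
}

const fragment = `nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("nodeRegistration").Parse(fragment))
	p := nodeParams{
		NodeName:  "ha-025067",
		NodeIP:    "192.168.39.22",
		CRISocket: "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}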
	I0422 17:17:22.482252   30338 kube-vip.go:111] generating kube-vip config ...
	I0422 17:17:22.482289   30338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:17:22.499572   30338 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:17:22.499685   30338 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0422 17:17:22.499788   30338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:17:22.511530   30338 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 17:17:22.511598   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 17:17:22.522317   30338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0422 17:17:22.539777   30338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:17:22.558702   30338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0422 17:17:22.578843   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0422 17:17:22.598371   30338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:17:22.602486   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
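The grep plus rewrite above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale mapping is dropped and the VIP 192.168.39.254 is appended. A Go sketch of the same idea, assuming the result is written to a staging file (root privileges are still needed to install it, which the log handles via sudo cp):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.254\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the VIP hostname, like the `grep -v` above.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Stage the result; installing it over /etc/hosts needs root (sudo cp in the log).
	if err := os.WriteFile("/etc/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println(err)
	}
}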
	I0422 17:17:22.617761   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:17:22.732562   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:17:22.750808   30338 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.22
	I0422 17:17:22.750831   30338 certs.go:194] generating shared ca certs ...
	I0422 17:17:22.750850   30338 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:22.751000   30338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:17:22.751050   30338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:17:22.751062   30338 certs.go:256] generating profile certs ...
	I0422 17:17:22.751114   30338 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:17:22.751146   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt with IP's: []
	I0422 17:17:22.915108   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt ...
	I0422 17:17:22.915152   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt: {Name:mk430bbc2ed98d56b9d3bf935e45898d0ff4a313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:22.915336   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key ...
	I0422 17:17:22.915357   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key: {Name:mkfbfc636d8b8074e5a1767eaca4ba73158825b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:22.915457   30338 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a
	I0422 17:17:22.915476   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.254]
	I0422 17:17:23.036108   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a ...
	I0422 17:17:23.036141   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a: {Name:mk10f3464e3fe632e615efa17cc1af5344bd012e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.036318   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a ...
	I0422 17:17:23.036342   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a: {Name:mk537b75872841f8afa81021b50c851254a7f89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.036439   30338 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:17:23.036527   30338 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
	I0422 17:17:23.036605   30338 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:17:23.036625   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt with IP's: []
	I0422 17:17:23.187088   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt ...
	I0422 17:17:23.187136   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt: {Name:mkda5bf715a5cd070a437870bf07f33adca40e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.187314   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key ...
	I0422 17:17:23.187328   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key: {Name:mkc14c2c1ca9036d53c70ad1a0a708516fe753a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.187425   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:17:23.187446   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:17:23.187462   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:17:23.187477   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:17:23.187495   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:17:23.187514   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:17:23.187532   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:17:23.187558   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:17:23.187618   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:17:23.187663   30338 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:17:23.187677   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:17:23.187710   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:17:23.187740   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:17:23.187776   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:17:23.187831   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:17:23.187879   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.187900   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.187918   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.188465   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:17:23.216928   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:17:23.244740   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:17:23.270485   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:17:23.296668   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 17:17:23.323400   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 17:17:23.350450   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:17:23.378433   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:17:23.405755   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:17:23.432548   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:17:23.459972   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:17:23.485881   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 17:17:23.505928   30338 ssh_runner.go:195] Run: openssl version
	I0422 17:17:23.521102   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:17:23.542466   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.547714   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.547764   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.554814   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:17:23.570174   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:17:23.581300   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.586029   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.586073   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.592061   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:17:23.603046   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:17:23.618173   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.624758   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.624818   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.630925   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
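The test -L / ln -fs pairs above implement OpenSSL's hashed-symlink lookup: each CA under /etc/ssl/certs must also be reachable as <subject-hash>.0 (b5213941.0 for the minikube CA). A Go sketch of deriving that hash and creating the link, assuming the openssl binary is available locally and reusing the paths already shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // b5213941 for the minikube CA above
	link := "/etc/ssl/certs/" + hash + ".0"
	// Only (re)create the symlink when it is missing, mirroring the test -L guard.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			fmt.Println("symlink failed:", err)
			return
		}
	}
	fmt.Println("CA reachable as", link)
}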
	I0422 17:17:23.642412   30338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:17:23.646609   30338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 17:17:23.646664   30338 kubeadm.go:391] StartCluster: {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:17:23.646754   30338 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 17:17:23.646803   30338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 17:17:23.685745   30338 cri.go:89] found id: ""
	I0422 17:17:23.685824   30338 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 17:17:23.696735   30338 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 17:17:23.707358   30338 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 17:17:23.717929   30338 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 17:17:23.717954   30338 kubeadm.go:156] found existing configuration files:
	
	I0422 17:17:23.718003   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 17:17:23.727799   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 17:17:23.727861   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 17:17:23.738774   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 17:17:23.748975   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 17:17:23.749033   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 17:17:23.759471   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 17:17:23.769183   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 17:17:23.769249   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 17:17:23.779355   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 17:17:23.789246   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 17:17:23.789307   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 17:17:23.800177   30338 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 17:17:23.900715   30338 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 17:17:23.900777   30338 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 17:17:24.027754   30338 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 17:17:24.027894   30338 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 17:17:24.028035   30338 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 17:17:24.245420   30338 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 17:17:24.378712   30338 out.go:204]   - Generating certificates and keys ...
	I0422 17:17:24.378833   30338 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 17:17:24.378947   30338 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 17:17:24.542801   30338 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 17:17:24.724926   30338 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 17:17:24.824807   30338 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 17:17:25.104482   30338 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 17:17:25.235796   30338 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 17:17:25.235913   30338 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-025067 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0422 17:17:25.360581   30338 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 17:17:25.360791   30338 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-025067 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0422 17:17:25.486631   30338 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 17:17:25.585077   30338 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 17:17:25.929873   30338 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 17:17:25.930367   30338 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 17:17:26.328948   30338 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 17:17:26.449725   30338 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 17:17:26.539056   30338 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 17:17:26.722381   30338 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 17:17:26.836166   30338 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 17:17:26.836622   30338 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 17:17:26.841711   30338 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 17:17:26.843751   30338 out.go:204]   - Booting up control plane ...
	I0422 17:17:26.843849   30338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 17:17:26.843953   30338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 17:17:26.844039   30338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 17:17:26.859962   30338 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 17:17:26.860846   30338 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 17:17:26.860919   30338 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 17:17:26.990901   30338 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 17:17:26.991009   30338 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 17:17:27.491088   30338 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.954848ms
	I0422 17:17:27.491224   30338 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 17:17:33.469170   30338 kubeadm.go:309] [api-check] The API server is healthy after 5.982033292s
	I0422 17:17:33.481806   30338 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 17:17:33.506950   30338 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 17:17:34.038663   30338 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 17:17:34.038823   30338 kubeadm.go:309] [mark-control-plane] Marking the node ha-025067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 17:17:34.060763   30338 kubeadm.go:309] [bootstrap-token] Using token: 7rz1nt.dzwgo1uwph8u4dan
	I0422 17:17:34.062632   30338 out.go:204]   - Configuring RBAC rules ...
	I0422 17:17:34.062795   30338 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 17:17:34.074146   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 17:17:34.081674   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 17:17:34.085022   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 17:17:34.088276   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 17:17:34.091886   30338 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 17:17:34.106861   30338 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 17:17:34.359105   30338 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 17:17:34.875599   30338 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 17:17:34.876676   30338 kubeadm.go:309] 
	I0422 17:17:34.876754   30338 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 17:17:34.876780   30338 kubeadm.go:309] 
	I0422 17:17:34.876868   30338 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 17:17:34.876880   30338 kubeadm.go:309] 
	I0422 17:17:34.876956   30338 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 17:17:34.877039   30338 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 17:17:34.877115   30338 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 17:17:34.877134   30338 kubeadm.go:309] 
	I0422 17:17:34.877205   30338 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 17:17:34.877216   30338 kubeadm.go:309] 
	I0422 17:17:34.877277   30338 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 17:17:34.877284   30338 kubeadm.go:309] 
	I0422 17:17:34.877323   30338 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 17:17:34.877382   30338 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 17:17:34.877457   30338 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 17:17:34.877465   30338 kubeadm.go:309] 
	I0422 17:17:34.877542   30338 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 17:17:34.877644   30338 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 17:17:34.877654   30338 kubeadm.go:309] 
	I0422 17:17:34.877721   30338 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7rz1nt.dzwgo1uwph8u4dan \
	I0422 17:17:34.877858   30338 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 17:17:34.877880   30338 kubeadm.go:309] 	--control-plane 
	I0422 17:17:34.877884   30338 kubeadm.go:309] 
	I0422 17:17:34.878024   30338 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 17:17:34.878044   30338 kubeadm.go:309] 
	I0422 17:17:34.878149   30338 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7rz1nt.dzwgo1uwph8u4dan \
	I0422 17:17:34.878261   30338 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 17:17:34.878930   30338 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
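The join commands printed above include --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the cluster CA's Subject Public Key Info. A short Go sketch that recomputes the value from the CA path used earlier in this log (reading the file locally is the assumption here):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm's discovery hash is SHA-256 over the CA's SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum[:])
}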
	I0422 17:17:34.878959   30338 cni.go:84] Creating CNI manager for ""
	I0422 17:17:34.878966   30338 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 17:17:34.880738   30338 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0422 17:17:34.881879   30338 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0422 17:17:34.887775   30338 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0422 17:17:34.887794   30338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0422 17:17:34.913083   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0422 17:17:35.280059   30338 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 17:17:35.280163   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:35.280176   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-025067 minikube.k8s.io/updated_at=2024_04_22T17_17_35_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=ha-025067 minikube.k8s.io/primary=true
	I0422 17:17:35.404189   30338 ops.go:34] apiserver oom_adj: -16
	I0422 17:17:35.421996   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:35.922431   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:36.422389   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:36.922341   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:37.422868   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:37.922959   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:38.422871   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:38.922933   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:39.422984   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:39.922589   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:40.423080   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:40.922709   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:41.423065   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:41.922294   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:42.422583   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:42.922922   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:43.422932   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:43.922094   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:44.422125   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:44.922968   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:45.422671   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:45.922862   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:46.422038   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:46.922225   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:47.422202   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:47.922994   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:48.024774   30338 kubeadm.go:1107] duration metric: took 12.744673877s to wait for elevateKubeSystemPrivileges
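The burst of identical kubectl get sa default runs above is minikube polling, at roughly 500ms intervals, until the default service account exists; that wait is what the 12.7s elevateKubeSystemPrivileges metric measures. A hedged Go sketch of such a wait loop, again using local exec in place of the SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}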
	W0422 17:17:48.024809   30338 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 17:17:48.024820   30338 kubeadm.go:393] duration metric: took 24.378158938s to StartCluster
	I0422 17:17:48.024837   30338 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:48.024911   30338 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:17:48.025566   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:48.025776   30338 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:17:48.025790   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 17:17:48.025804   30338 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 17:17:48.025852   30338 addons.go:69] Setting storage-provisioner=true in profile "ha-025067"
	I0422 17:17:48.025883   30338 addons.go:234] Setting addon storage-provisioner=true in "ha-025067"
	I0422 17:17:48.025901   30338 addons.go:69] Setting default-storageclass=true in profile "ha-025067"
	I0422 17:17:48.025918   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:17:48.025927   30338 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-025067"
	I0422 17:17:48.025798   30338 start.go:240] waiting for startup goroutines ...
	I0422 17:17:48.026026   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:17:48.026312   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.026312   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.026368   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.026340   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.041188   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0422 17:17:48.041188   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0422 17:17:48.041652   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.041804   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.042213   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.042240   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.042323   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.042345   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.042568   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.042644   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.042826   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:48.043097   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.043118   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.045097   30338 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:17:48.045434   30338 kapi.go:59] client config for ha-025067: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0422 17:17:48.045942   30338 cert_rotation.go:137] Starting client certificate rotation controller
	I0422 17:17:48.046189   30338 addons.go:234] Setting addon default-storageclass=true in "ha-025067"
	I0422 17:17:48.046226   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:17:48.046491   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.046525   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.058471   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I0422 17:17:48.058926   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.059453   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.059473   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.059823   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.060049   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:48.060592   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I0422 17:17:48.060934   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.061449   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.061469   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.061829   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.061886   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:48.062291   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.062312   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.064567   30338 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 17:17:48.066096   30338 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 17:17:48.066111   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 17:17:48.066131   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:48.069330   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.069810   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:48.069837   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.069995   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:48.070200   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:48.070365   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:48.070527   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:48.077528   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0422 17:17:48.077920   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.078416   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.078445   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.078788   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.078989   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:48.080708   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:48.080950   30338 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 17:17:48.080969   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 17:17:48.080992   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:48.083420   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.083908   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:48.083938   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.084085   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:48.084254   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:48.084394   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:48.084538   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:48.198341   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 17:17:48.236215   30338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 17:17:48.260479   30338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 17:17:48.808081   30338 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0422 17:17:48.808173   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:48.808191   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:48.808571   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:48.808590   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:48.808616   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:48.808628   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:48.808864   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:48.808880   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:48.808911   30338 main.go:141] libmachine: (ha-025067) DBG | Closing plugin on server side
	I0422 17:17:48.808996   30338 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0422 17:17:48.809009   30338 round_trippers.go:469] Request Headers:
	I0422 17:17:48.809019   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:17:48.809023   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:17:48.819866   30338 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0422 17:17:48.820646   30338 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0422 17:17:48.820666   30338 round_trippers.go:469] Request Headers:
	I0422 17:17:48.820677   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:17:48.820682   30338 round_trippers.go:473]     Content-Type: application/json
	I0422 17:17:48.820687   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:17:48.823392   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:17:48.823590   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:48.823607   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:48.823950   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:48.823965   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:49.039496   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:49.039525   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:49.039800   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:49.039839   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:49.039878   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:49.039901   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:49.040187   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:49.040205   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:49.040230   30338 main.go:141] libmachine: (ha-025067) DBG | Closing plugin on server side
	I0422 17:17:49.042419   30338 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0422 17:17:49.043697   30338 addons.go:505] duration metric: took 1.017890194s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0422 17:17:49.043728   30338 start.go:245] waiting for cluster config update ...
	I0422 17:17:49.043739   30338 start.go:254] writing updated cluster config ...
	I0422 17:17:49.045267   30338 out.go:177] 
	I0422 17:17:49.046638   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:17:49.046697   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:17:49.048449   30338 out.go:177] * Starting "ha-025067-m02" control-plane node in "ha-025067" cluster
	I0422 17:17:49.049972   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:17:49.049997   30338 cache.go:56] Caching tarball of preloaded images
	I0422 17:17:49.050097   30338 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:17:49.050112   30338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:17:49.050178   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:17:49.050551   30338 start.go:360] acquireMachinesLock for ha-025067-m02: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:17:49.050595   30338 start.go:364] duration metric: took 24.853µs to acquireMachinesLock for "ha-025067-m02"
	I0422 17:17:49.050608   30338 start.go:93] Provisioning new machine with config: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:17:49.050679   30338 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0422 17:17:49.052323   30338 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 17:17:49.052399   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:49.052422   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:49.067500   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0422 17:17:49.068031   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:49.068556   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:49.068577   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:49.068921   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:49.069139   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:17:49.069343   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:17:49.069529   30338 start.go:159] libmachine.API.Create for "ha-025067" (driver="kvm2")
	I0422 17:17:49.069555   30338 client.go:168] LocalClient.Create starting
	I0422 17:17:49.069594   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 17:17:49.069637   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:17:49.069675   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:17:49.069745   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 17:17:49.069775   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:17:49.069792   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:17:49.069819   30338 main.go:141] libmachine: Running pre-create checks...
	I0422 17:17:49.069832   30338 main.go:141] libmachine: (ha-025067-m02) Calling .PreCreateCheck
	I0422 17:17:49.069994   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetConfigRaw
	I0422 17:17:49.070438   30338 main.go:141] libmachine: Creating machine...
	I0422 17:17:49.070458   30338 main.go:141] libmachine: (ha-025067-m02) Calling .Create
	I0422 17:17:49.070582   30338 main.go:141] libmachine: (ha-025067-m02) Creating KVM machine...
	I0422 17:17:49.071942   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found existing default KVM network
	I0422 17:17:49.072032   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found existing private KVM network mk-ha-025067
	I0422 17:17:49.072217   30338 main.go:141] libmachine: (ha-025067-m02) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02 ...
	I0422 17:17:49.072242   30338 main.go:141] libmachine: (ha-025067-m02) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 17:17:49.072289   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.072190   31195 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:17:49.072371   30338 main.go:141] libmachine: (ha-025067-m02) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 17:17:49.285347   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.285226   31195 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa...
	I0422 17:17:49.423872   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.423692   31195 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/ha-025067-m02.rawdisk...
	I0422 17:17:49.423922   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Writing magic tar header
	I0422 17:17:49.423940   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Writing SSH key tar header
	I0422 17:17:49.423952   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.423844   31195 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02 ...
	I0422 17:17:49.423968   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02
	I0422 17:17:49.424016   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02 (perms=drwx------)
	I0422 17:17:49.424049   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 17:17:49.424075   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 17:17:49.424085   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 17:17:49.424095   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 17:17:49.424103   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 17:17:49.424112   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:17:49.424121   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 17:17:49.424131   30338 main.go:141] libmachine: (ha-025067-m02) Creating domain...
	I0422 17:17:49.424142   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 17:17:49.424151   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 17:17:49.424159   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins
	I0422 17:17:49.424167   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home
	I0422 17:17:49.424173   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Skipping /home - not owner
	I0422 17:17:49.425159   30338 main.go:141] libmachine: (ha-025067-m02) define libvirt domain using xml: 
	I0422 17:17:49.425179   30338 main.go:141] libmachine: (ha-025067-m02) <domain type='kvm'>
	I0422 17:17:49.425188   30338 main.go:141] libmachine: (ha-025067-m02)   <name>ha-025067-m02</name>
	I0422 17:17:49.425195   30338 main.go:141] libmachine: (ha-025067-m02)   <memory unit='MiB'>2200</memory>
	I0422 17:17:49.425203   30338 main.go:141] libmachine: (ha-025067-m02)   <vcpu>2</vcpu>
	I0422 17:17:49.425209   30338 main.go:141] libmachine: (ha-025067-m02)   <features>
	I0422 17:17:49.425216   30338 main.go:141] libmachine: (ha-025067-m02)     <acpi/>
	I0422 17:17:49.425233   30338 main.go:141] libmachine: (ha-025067-m02)     <apic/>
	I0422 17:17:49.425246   30338 main.go:141] libmachine: (ha-025067-m02)     <pae/>
	I0422 17:17:49.425258   30338 main.go:141] libmachine: (ha-025067-m02)     
	I0422 17:17:49.425293   30338 main.go:141] libmachine: (ha-025067-m02)   </features>
	I0422 17:17:49.425328   30338 main.go:141] libmachine: (ha-025067-m02)   <cpu mode='host-passthrough'>
	I0422 17:17:49.425364   30338 main.go:141] libmachine: (ha-025067-m02)   
	I0422 17:17:49.425394   30338 main.go:141] libmachine: (ha-025067-m02)   </cpu>
	I0422 17:17:49.425402   30338 main.go:141] libmachine: (ha-025067-m02)   <os>
	I0422 17:17:49.425411   30338 main.go:141] libmachine: (ha-025067-m02)     <type>hvm</type>
	I0422 17:17:49.425420   30338 main.go:141] libmachine: (ha-025067-m02)     <boot dev='cdrom'/>
	I0422 17:17:49.425429   30338 main.go:141] libmachine: (ha-025067-m02)     <boot dev='hd'/>
	I0422 17:17:49.425436   30338 main.go:141] libmachine: (ha-025067-m02)     <bootmenu enable='no'/>
	I0422 17:17:49.425450   30338 main.go:141] libmachine: (ha-025067-m02)   </os>
	I0422 17:17:49.425469   30338 main.go:141] libmachine: (ha-025067-m02)   <devices>
	I0422 17:17:49.425488   30338 main.go:141] libmachine: (ha-025067-m02)     <disk type='file' device='cdrom'>
	I0422 17:17:49.425505   30338 main.go:141] libmachine: (ha-025067-m02)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/boot2docker.iso'/>
	I0422 17:17:49.425516   30338 main.go:141] libmachine: (ha-025067-m02)       <target dev='hdc' bus='scsi'/>
	I0422 17:17:49.425528   30338 main.go:141] libmachine: (ha-025067-m02)       <readonly/>
	I0422 17:17:49.425537   30338 main.go:141] libmachine: (ha-025067-m02)     </disk>
	I0422 17:17:49.425549   30338 main.go:141] libmachine: (ha-025067-m02)     <disk type='file' device='disk'>
	I0422 17:17:49.425562   30338 main.go:141] libmachine: (ha-025067-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 17:17:49.425583   30338 main.go:141] libmachine: (ha-025067-m02)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/ha-025067-m02.rawdisk'/>
	I0422 17:17:49.425599   30338 main.go:141] libmachine: (ha-025067-m02)       <target dev='hda' bus='virtio'/>
	I0422 17:17:49.425610   30338 main.go:141] libmachine: (ha-025067-m02)     </disk>
	I0422 17:17:49.425622   30338 main.go:141] libmachine: (ha-025067-m02)     <interface type='network'>
	I0422 17:17:49.425636   30338 main.go:141] libmachine: (ha-025067-m02)       <source network='mk-ha-025067'/>
	I0422 17:17:49.425652   30338 main.go:141] libmachine: (ha-025067-m02)       <model type='virtio'/>
	I0422 17:17:49.425664   30338 main.go:141] libmachine: (ha-025067-m02)     </interface>
	I0422 17:17:49.425676   30338 main.go:141] libmachine: (ha-025067-m02)     <interface type='network'>
	I0422 17:17:49.425689   30338 main.go:141] libmachine: (ha-025067-m02)       <source network='default'/>
	I0422 17:17:49.425697   30338 main.go:141] libmachine: (ha-025067-m02)       <model type='virtio'/>
	I0422 17:17:49.425710   30338 main.go:141] libmachine: (ha-025067-m02)     </interface>
	I0422 17:17:49.425723   30338 main.go:141] libmachine: (ha-025067-m02)     <serial type='pty'>
	I0422 17:17:49.425742   30338 main.go:141] libmachine: (ha-025067-m02)       <target port='0'/>
	I0422 17:17:49.425752   30338 main.go:141] libmachine: (ha-025067-m02)     </serial>
	I0422 17:17:49.425831   30338 main.go:141] libmachine: (ha-025067-m02)     <console type='pty'>
	I0422 17:17:49.425853   30338 main.go:141] libmachine: (ha-025067-m02)       <target type='serial' port='0'/>
	I0422 17:17:49.425862   30338 main.go:141] libmachine: (ha-025067-m02)     </console>
	I0422 17:17:49.425873   30338 main.go:141] libmachine: (ha-025067-m02)     <rng model='virtio'>
	I0422 17:17:49.425897   30338 main.go:141] libmachine: (ha-025067-m02)       <backend model='random'>/dev/random</backend>
	I0422 17:17:49.425921   30338 main.go:141] libmachine: (ha-025067-m02)     </rng>
	I0422 17:17:49.425954   30338 main.go:141] libmachine: (ha-025067-m02)     
	I0422 17:17:49.425965   30338 main.go:141] libmachine: (ha-025067-m02)     
	I0422 17:17:49.425975   30338 main.go:141] libmachine: (ha-025067-m02)   </devices>
	I0422 17:17:49.425984   30338 main.go:141] libmachine: (ha-025067-m02) </domain>
	I0422 17:17:49.426001   30338 main.go:141] libmachine: (ha-025067-m02) 
	I0422 17:17:49.432608   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:b0:45:d4 in network default
	I0422 17:17:49.433146   30338 main.go:141] libmachine: (ha-025067-m02) Ensuring networks are active...
	I0422 17:17:49.433206   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:49.433821   30338 main.go:141] libmachine: (ha-025067-m02) Ensuring network default is active
	I0422 17:17:49.434145   30338 main.go:141] libmachine: (ha-025067-m02) Ensuring network mk-ha-025067 is active
	I0422 17:17:49.434613   30338 main.go:141] libmachine: (ha-025067-m02) Getting domain xml...
	I0422 17:17:49.435370   30338 main.go:141] libmachine: (ha-025067-m02) Creating domain...
	I0422 17:17:50.666188   30338 main.go:141] libmachine: (ha-025067-m02) Waiting to get IP...
	I0422 17:17:50.667102   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:50.667502   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:50.667561   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:50.667491   31195 retry.go:31] will retry after 301.277138ms: waiting for machine to come up
	I0422 17:17:50.970032   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:50.970425   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:50.970476   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:50.970385   31195 retry.go:31] will retry after 336.847099ms: waiting for machine to come up
	I0422 17:17:51.309141   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:51.309579   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:51.309603   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:51.309517   31195 retry.go:31] will retry after 293.927768ms: waiting for machine to come up
	I0422 17:17:51.605249   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:51.605761   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:51.605784   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:51.605714   31195 retry.go:31] will retry after 379.885385ms: waiting for machine to come up
	I0422 17:17:51.987196   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:51.987549   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:51.987570   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:51.987520   31195 retry.go:31] will retry after 520.525548ms: waiting for machine to come up
	I0422 17:17:52.509209   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:52.509674   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:52.509697   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:52.509635   31195 retry.go:31] will retry after 711.500166ms: waiting for machine to come up
	I0422 17:17:53.222388   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:53.222875   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:53.222911   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:53.222805   31195 retry.go:31] will retry after 831.419751ms: waiting for machine to come up
	I0422 17:17:54.057220   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:54.057699   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:54.057746   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:54.057651   31195 retry.go:31] will retry after 1.278962374s: waiting for machine to come up
	I0422 17:17:55.338427   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:55.339058   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:55.339086   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:55.339005   31195 retry.go:31] will retry after 1.432428767s: waiting for machine to come up
	I0422 17:17:56.773315   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:56.773745   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:56.773771   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:56.773708   31195 retry.go:31] will retry after 1.431656718s: waiting for machine to come up
	I0422 17:17:58.206743   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:58.207257   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:58.207287   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:58.207208   31195 retry.go:31] will retry after 1.95615804s: waiting for machine to come up
	I0422 17:18:00.165373   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:00.165998   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:18:00.166025   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:18:00.165961   31195 retry.go:31] will retry after 2.219203379s: waiting for machine to come up
	I0422 17:18:02.388264   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:02.388717   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:18:02.388746   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:18:02.388639   31195 retry.go:31] will retry after 3.64058761s: waiting for machine to come up
	I0422 17:18:06.031722   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:06.032202   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:18:06.032232   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:18:06.032159   31195 retry.go:31] will retry after 5.444187126s: waiting for machine to come up
	I0422 17:18:11.479729   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.480279   30338 main.go:141] libmachine: (ha-025067-m02) Found IP for machine: 192.168.39.56
	I0422 17:18:11.480303   30338 main.go:141] libmachine: (ha-025067-m02) Reserving static IP address...
	I0422 17:18:11.480318   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has current primary IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.480642   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find host DHCP lease matching {name: "ha-025067-m02", mac: "52:54:00:f3:68:d1", ip: "192.168.39.56"} in network mk-ha-025067
	I0422 17:18:11.550688   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Getting to WaitForSSH function...
	I0422 17:18:11.550718   30338 main.go:141] libmachine: (ha-025067-m02) Reserved static IP address: 192.168.39.56
	I0422 17:18:11.550747   30338 main.go:141] libmachine: (ha-025067-m02) Waiting for SSH to be available...
	I0422 17:18:11.553229   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.553630   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.553658   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.554001   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Using SSH client type: external
	I0422 17:18:11.554039   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa (-rw-------)
	I0422 17:18:11.554067   30338 main.go:141] libmachine: (ha-025067-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 17:18:11.554081   30338 main.go:141] libmachine: (ha-025067-m02) DBG | About to run SSH command:
	I0422 17:18:11.554094   30338 main.go:141] libmachine: (ha-025067-m02) DBG | exit 0
	I0422 17:18:11.679377   30338 main.go:141] libmachine: (ha-025067-m02) DBG | SSH cmd err, output: <nil>: 
	I0422 17:18:11.679618   30338 main.go:141] libmachine: (ha-025067-m02) KVM machine creation complete!
	I0422 17:18:11.679935   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetConfigRaw
	I0422 17:18:11.680482   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:11.680683   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:11.680837   30338 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 17:18:11.680851   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:18:11.682080   30338 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 17:18:11.682098   30338 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 17:18:11.682105   30338 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 17:18:11.682114   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:11.684353   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.684726   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.684755   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.684907   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:11.685071   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.685254   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.685415   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:11.685582   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:11.685773   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:11.685785   30338 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 17:18:11.794800   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:18:11.794822   30338 main.go:141] libmachine: Detecting the provisioner...
	I0422 17:18:11.794828   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:11.797776   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.798206   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.798245   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.798382   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:11.798584   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.798743   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.798903   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:11.799169   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:11.799391   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:11.799410   30338 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 17:18:11.907877   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 17:18:11.907954   30338 main.go:141] libmachine: found compatible host: buildroot
	I0422 17:18:11.907967   30338 main.go:141] libmachine: Provisioning with buildroot...
	I0422 17:18:11.907978   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:18:11.908248   30338 buildroot.go:166] provisioning hostname "ha-025067-m02"
	I0422 17:18:11.908271   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:18:11.908448   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:11.911106   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.911484   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.911517   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.911646   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:11.911837   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.911987   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.912142   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:11.912294   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:11.912525   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:11.912549   30338 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067-m02 && echo "ha-025067-m02" | sudo tee /etc/hostname
	I0422 17:18:12.035161   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067-m02
	
	I0422 17:18:12.035190   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.037839   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.038117   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.038157   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.038312   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.038574   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.038754   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.038930   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.039092   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:12.039396   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:12.039424   30338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:18:12.157161   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:18:12.157193   30338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:18:12.157210   30338 buildroot.go:174] setting up certificates
	I0422 17:18:12.157221   30338 provision.go:84] configureAuth start
	I0422 17:18:12.157233   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:18:12.157506   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:12.160150   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.160512   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.160540   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.160713   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.162801   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.163089   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.163108   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.163267   30338 provision.go:143] copyHostCerts
	I0422 17:18:12.163294   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:18:12.163330   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:18:12.163343   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:18:12.163429   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:18:12.163530   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:18:12.163554   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:18:12.163561   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:18:12.163588   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:18:12.163637   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:18:12.163653   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:18:12.163656   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:18:12.163676   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:18:12.163718   30338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067-m02 san=[127.0.0.1 192.168.39.56 ha-025067-m02 localhost minikube]
	I0422 17:18:12.318423   30338 provision.go:177] copyRemoteCerts
	I0422 17:18:12.318475   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:18:12.318503   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.321344   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.321682   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.321723   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.321859   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.322043   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.322178   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.322358   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:12.406005   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:18:12.406072   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:18:12.431931   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:18:12.432041   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 17:18:12.456622   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:18:12.456683   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 17:18:12.482344   30338 provision.go:87] duration metric: took 325.111637ms to configureAuth
	I0422 17:18:12.482368   30338 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:18:12.482570   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:18:12.482649   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.485568   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.486114   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.486143   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.486309   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.486485   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.486652   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.486795   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.486947   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:12.487112   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:12.487151   30338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:18:12.759364   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:18:12.759394   30338 main.go:141] libmachine: Checking connection to Docker...
	I0422 17:18:12.759404   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetURL
	I0422 17:18:12.760603   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Using libvirt version 6000000
	I0422 17:18:12.762733   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.763029   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.763070   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.763218   30338 main.go:141] libmachine: Docker is up and running!
	I0422 17:18:12.763240   30338 main.go:141] libmachine: Reticulating splines...
	I0422 17:18:12.763247   30338 client.go:171] duration metric: took 23.693681012s to LocalClient.Create
	I0422 17:18:12.763278   30338 start.go:167] duration metric: took 23.693749721s to libmachine.API.Create "ha-025067"
	I0422 17:18:12.763288   30338 start.go:293] postStartSetup for "ha-025067-m02" (driver="kvm2")
	I0422 17:18:12.763298   30338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:18:12.763314   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:12.763556   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:18:12.763577   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.765458   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.765721   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.765748   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.765899   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.766068   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.766210   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.766321   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:12.851971   30338 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:18:12.856551   30338 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:18:12.856574   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:18:12.856626   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:18:12.856689   30338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:18:12.856700   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:18:12.856777   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:18:12.866934   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:18:12.892487   30338 start.go:296] duration metric: took 129.185273ms for postStartSetup
	I0422 17:18:12.892548   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetConfigRaw
	I0422 17:18:12.893179   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:12.895712   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.896057   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.896088   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.896343   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:18:12.896525   30338 start.go:128] duration metric: took 23.845835741s to createHost
	I0422 17:18:12.896550   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.898898   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.899256   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.899276   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.899464   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.899631   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.899749   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.899839   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.900026   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:12.900214   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:12.900229   30338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:18:13.008347   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806292.979382927
	
	I0422 17:18:13.008366   30338 fix.go:216] guest clock: 1713806292.979382927
	I0422 17:18:13.008373   30338 fix.go:229] Guest: 2024-04-22 17:18:12.979382927 +0000 UTC Remote: 2024-04-22 17:18:12.896537372 +0000 UTC m=+80.403805215 (delta=82.845555ms)
	I0422 17:18:13.008387   30338 fix.go:200] guest clock delta is within tolerance: 82.845555ms
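
The lines above capture the guest clock over SSH with "date +%s.%N" and accept the resulting ~83ms delta against the host clock. A minimal Go sketch of that comparison, assuming the remote output has already been captured as a string; parseGuestClock is an illustrative name, not minikube's actual helper.

    // Sketch only: parse the guest's "date +%s.%N" output and report the
    // delta against the local clock, mirroring the tolerance check logged above.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1713806292.979382927") // value taken from the log above
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta: %v\n", delta)
    }
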
	I0422 17:18:13.008391   30338 start.go:83] releasing machines lock for "ha-025067-m02", held for 23.957790272s
	I0422 17:18:13.008406   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.008671   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:13.011031   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.011471   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:13.011501   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.013712   30338 out.go:177] * Found network options:
	I0422 17:18:13.015255   30338 out.go:177]   - NO_PROXY=192.168.39.22
	W0422 17:18:13.016682   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 17:18:13.016711   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.017188   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.017361   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.017457   30338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:18:13.017491   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	W0422 17:18:13.017567   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 17:18:13.017625   30338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:18:13.017640   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:13.020234   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020346   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020583   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:13.020611   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020741   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:13.020828   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:13.020860   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020911   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:13.020990   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:13.021066   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:13.021136   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:13.021201   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:13.021247   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:13.021372   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:13.260969   30338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:18:13.267814   30338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:18:13.267886   30338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:18:13.284474   30338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 17:18:13.284503   30338 start.go:494] detecting cgroup driver to use...
	I0422 17:18:13.284577   30338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:18:13.301433   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:18:13.316405   30338 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:18:13.316458   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:18:13.331387   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:18:13.346005   30338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:18:13.471058   30338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:18:13.632838   30338 docker.go:233] disabling docker service ...
	I0422 17:18:13.632909   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:18:13.647289   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:18:13.660863   30338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:18:13.793794   30338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:18:13.944560   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:18:13.959303   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:18:13.979204   30338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:18:13.979272   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:13.990469   30338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:18:13.990522   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.001643   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.012754   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.023974   30338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:18:14.035655   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.048156   30338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.067227   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
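
The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, re-add conmon_cgroup = "pod", and open ip_unprivileged_port_start through default_sysctls. A rough Go equivalent of the first two rewrites, offered only as a sketch of the line-oriented edit, not minikube's implementation.

    // Sketch: rewrite pause_image and cgroup_manager in a cri-o drop-in file,
    // the way the sed one-liners above do. Path and values mirror the log.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        out := string(data)
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.9"`)
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(out, `cgroup_manager = "cgroupfs"`)
        if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
            panic(err)
        }
    }
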
	I0422 17:18:14.079560   30338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:18:14.090879   30338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 17:18:14.090941   30338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 17:18:14.105159   30338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:18:14.116709   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:18:14.243784   30338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:18:14.387806   30338 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:18:14.387882   30338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:18:14.394211   30338 start.go:562] Will wait 60s for crictl version
	I0422 17:18:14.394286   30338 ssh_runner.go:195] Run: which crictl
	I0422 17:18:14.398222   30338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:18:14.435419   30338 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:18:14.435502   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:18:14.465373   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:18:14.498246   30338 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:18:14.499750   30338 out.go:177]   - env NO_PROXY=192.168.39.22
	I0422 17:18:14.501194   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:14.503676   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:14.504065   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:14.504096   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:14.504367   30338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:18:14.509086   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
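
The bash pipeline above makes the /etc/hosts update idempotent: it drops any existing host.minikube.internal entry, appends the fresh 192.168.39.1 mapping, and copies the result back with sudo. A small Go sketch of the same filter-then-append pattern, writing only to a temporary path; the privileged copy over /etc/hosts is left out.

    // Sketch of the hosts-file update performed above: strip any line ending in
    // "\thost.minikube.internal" and append the new mapping.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue // drop the stale entry, like the `grep -v` above
            }
            kept = append(kept, line)
        }
        kept = append(kept, "192.168.39.1\thost.minikube.internal")
        if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }
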
	I0422 17:18:14.525938   30338 mustload.go:65] Loading cluster: ha-025067
	I0422 17:18:14.526231   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:18:14.526625   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:18:14.526683   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:18:14.541354   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0422 17:18:14.541793   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:18:14.542270   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:18:14.542292   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:18:14.542641   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:18:14.542784   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:18:14.544411   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:18:14.544687   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:18:14.544720   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:18:14.558909   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0422 17:18:14.559288   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:18:14.559734   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:18:14.559754   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:18:14.560057   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:18:14.560230   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:18:14.560411   30338 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.56
	I0422 17:18:14.560423   30338 certs.go:194] generating shared ca certs ...
	I0422 17:18:14.560441   30338 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:18:14.560558   30338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:18:14.560593   30338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:18:14.560602   30338 certs.go:256] generating profile certs ...
	I0422 17:18:14.560661   30338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:18:14.560684   30338 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db
	I0422 17:18:14.560698   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.56 192.168.39.254]
	I0422 17:18:14.748385   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db ...
	I0422 17:18:14.748418   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db: {Name:mkb92a6fdff09c9dea3d22aedf18d5db4bbbc5e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:18:14.748613   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db ...
	I0422 17:18:14.748631   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db: {Name:mkb907f809d28a0e996ba56e8d5ef1ee7be2bc57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:18:14.748731   30338 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:18:14.748899   30338 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
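
The certs.go/crypto.go lines above mint the apiserver certificate whose SANs cover the service IP, both control-plane node IPs and the HA VIP 192.168.39.254. A self-contained sketch of issuing a certificate with IP SANs via crypto/x509; it is self-signed here for brevity, whereas minikube signs it with the minikubeCA key.

    // Sketch: create a certificate carrying IP SANs like the apiserver cert
    // generated above. Self-signed for brevity; minikube signs with its CA.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // SANs mirroring the IP list in the log above
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.22"), net.ParseIP("192.168.39.56"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }
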
	I0422 17:18:14.749065   30338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:18:14.749085   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:18:14.749103   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:18:14.749120   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:18:14.749139   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:18:14.749172   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:18:14.749205   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:18:14.749225   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:18:14.749243   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:18:14.749302   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:18:14.749342   30338 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:18:14.749357   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:18:14.749393   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:18:14.749424   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:18:14.749454   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:18:14.749514   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:18:14.749556   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:18:14.749576   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:14.749595   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:18:14.749634   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:18:14.752745   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:14.753127   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:18:14.753153   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:14.753301   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:18:14.753562   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:18:14.753707   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:18:14.753837   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:18:14.831489   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0422 17:18:14.838744   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 17:18:14.853986   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0422 17:18:14.858936   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 17:18:14.872716   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 17:18:14.878795   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 17:18:14.892467   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0422 17:18:14.897410   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 17:18:14.908528   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0422 17:18:14.913020   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 17:18:14.926200   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0422 17:18:14.931479   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0422 17:18:14.943951   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:18:14.969496   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:18:14.993368   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:18:15.017283   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:18:15.041614   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 17:18:15.066195   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 17:18:15.093045   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:18:15.119452   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:18:15.145896   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:18:15.172473   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:18:15.197815   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:18:15.223804   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 17:18:15.242470   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 17:18:15.260737   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 17:18:15.278450   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 17:18:15.295819   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 17:18:15.314207   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0422 17:18:15.333047   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 17:18:15.351644   30338 ssh_runner.go:195] Run: openssl version
	I0422 17:18:15.357445   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:18:15.369147   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:18:15.373944   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:18:15.373997   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:18:15.379931   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:18:15.391585   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:18:15.403104   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:18:15.407990   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:18:15.408037   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:18:15.414011   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:18:15.426928   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:18:15.440211   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:15.445267   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:15.445327   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:15.451531   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:18:15.463491   30338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:18:15.467879   30338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 17:18:15.467926   30338 kubeadm.go:928] updating node {m02 192.168.39.56 8443 v1.30.0 crio true true} ...
	I0422 17:18:15.468000   30338 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:18:15.468023   30338 kube-vip.go:111] generating kube-vip config ...
	I0422 17:18:15.468058   30338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:18:15.485053   30338 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:18:15.485118   30338 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
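
The kube-vip.go lines above render the static-pod manifest with the VIP (192.168.39.254), API port 8443 and control-plane load-balancing enabled. A condensed text/template sketch of that kind of rendering; the template below is abbreviated and illustrative, not minikube's real template.

    // Sketch: render a cut-down kube-vip static-pod manifest from a template,
    // parameterised on the VIP and port the way the full manifest above is.
    package main

    import (
        "os"
        "text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.7.1
        args: ["manager"]
        env:
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
      hostNetwork: true
    `

    func main() {
        tmpl := template.Must(template.New("kube-vip").Parse(manifest))
        if err := tmpl.Execute(os.Stdout, struct {
            VIP  string
            Port int
        }{VIP: "192.168.39.254", Port: 8443}); err != nil {
            panic(err)
        }
    }
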
	I0422 17:18:15.485175   30338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:18:15.496207   30338 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 17:18:15.496273   30338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 17:18:15.508457   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 17:18:15.508484   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:18:15.508543   30338 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0422 17:18:15.508574   30338 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0422 17:18:15.508554   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:18:15.513360   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 17:18:15.513388   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 17:18:16.338624   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:18:16.338695   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:18:16.344483   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 17:18:16.344518   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 17:18:16.661755   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:18:16.678827   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:18:16.678921   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:18:16.683453   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 17:18:16.683481   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
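
The binary transfer above downloads kubectl, kubeadm and kubelet from dl.k8s.io, verifying each against its .sha256 sidecar (the "checksum=file:" URLs), before copying them into /var/lib/minikube/binaries on the node. A stdlib-only Go sketch of one download-and-verify step; it buffers the file in memory, which is fine for a sketch but not how a large transfer would normally be streamed.

    // Sketch: download a release binary and check it against its .sha256 file,
    // roughly what the checksum=file:... download lines above describe.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        const url = "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
        bin, err := fetch(url)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(url + ".sha256")
        if err != nil {
            panic(err)
        }
        want := strings.Fields(string(sum))[0]
        h := sha256.Sum256(bin)
        if got := hex.EncodeToString(h[:]); got != want {
            panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
        }
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            panic(err)
        }
    }
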
	I0422 17:18:17.126421   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 17:18:17.136530   30338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0422 17:18:17.154383   30338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:18:17.172209   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 17:18:17.190007   30338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:18:17.194247   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:18:17.207348   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:18:17.331117   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:18:17.348555   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:18:17.349012   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:18:17.349068   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:18:17.363594   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0422 17:18:17.364056   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:18:17.364557   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:18:17.364581   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:18:17.364874   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:18:17.365050   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:18:17.365157   30338 start.go:316] joinCluster: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:18:17.365276   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 17:18:17.365289   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:18:17.368150   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:17.368603   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:18:17.368635   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:17.368785   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:18:17.368960   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:18:17.369205   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:18:17.369361   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:18:17.627712   30338 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:18:17.627758   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pcnw2.r0uhk8w13xxqxqvc --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m02 --control-plane --apiserver-advertise-address=192.168.39.56 --apiserver-bind-port=8443"
	I0422 17:18:40.513292   30338 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pcnw2.r0uhk8w13xxqxqvc --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m02 --control-plane --apiserver-advertise-address=192.168.39.56 --apiserver-bind-port=8443": (22.885505336s)
	I0422 17:18:40.513329   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 17:18:41.124919   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-025067-m02 minikube.k8s.io/updated_at=2024_04_22T17_18_41_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=ha-025067 minikube.k8s.io/primary=false
	I0422 17:18:41.286158   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-025067-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 17:18:41.415448   30338 start.go:318] duration metric: took 24.050283777s to joinCluster
	I0422 17:18:41.415532   30338 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:18:41.417217   30338 out.go:177] * Verifying Kubernetes components...
	I0422 17:18:41.415817   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:18:41.418824   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:18:41.719944   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:18:41.801607   30338 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:18:41.801821   30338 kapi.go:59] client config for ha-025067: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 17:18:41.801880   30338 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.22:8443
	I0422 17:18:41.802044   30338 node_ready.go:35] waiting up to 6m0s for node "ha-025067-m02" to be "Ready" ...
	I0422 17:18:41.802127   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:41.802135   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:41.802143   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:41.802146   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:41.813295   30338 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0422 17:18:42.303296   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:42.303320   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:42.303330   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:42.303336   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:42.306497   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:42.802979   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:42.803005   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:42.803017   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:42.803024   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:42.807306   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:43.303029   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:43.303049   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:43.303057   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:43.303060   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:43.307003   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:43.803281   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:43.803327   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:43.803335   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:43.803339   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:43.812036   30338 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 17:18:43.812788   30338 node_ready.go:53] node "ha-025067-m02" has status "Ready":"False"
	I0422 17:18:44.302366   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:44.302388   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:44.302396   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:44.302401   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:44.306400   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:44.802285   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:44.802312   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:44.802323   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:44.802327   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:44.806236   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:45.303263   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:45.303286   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:45.303294   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:45.303298   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:45.306644   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:45.802342   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:45.802375   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:45.802387   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:45.802392   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:45.805794   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:46.303159   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:46.303184   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:46.303192   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:46.303195   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:46.307759   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:46.308529   30338 node_ready.go:53] node "ha-025067-m02" has status "Ready":"False"
	I0422 17:18:46.803027   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:46.803058   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:46.803068   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:46.803074   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:46.806330   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:47.303231   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:47.303268   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:47.303276   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:47.303281   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:47.307625   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:47.802259   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:47.802314   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:47.802325   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:47.802334   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:47.805691   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.303154   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:48.303181   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.303192   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.303196   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.306489   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.802850   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:48.802876   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.802888   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.802896   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.809475   30338 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 17:18:48.810085   30338 node_ready.go:49] node "ha-025067-m02" has status "Ready":"True"
	I0422 17:18:48.810116   30338 node_ready.go:38] duration metric: took 7.008048903s for node "ha-025067-m02" to be "Ready" ...
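
The node_ready.go loop above polls GET /api/v1/nodes/ha-025067-m02 roughly every 500ms until the node reports a Ready condition of True, authenticating with the client certificate, key and CA listed in the kapi.go line. A stdlib Go sketch of that poll; the certificate paths are placeholders standing in for the profile paths shown in the log, and the node struct covers only the fields the check needs.

    // Sketch: poll the API server for a node's Ready condition using the client
    // certificate, key and CA shown in the kapi.go config above. Stdlib only.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        cert, err := tls.LoadX509KeyPair(
            os.ExpandEnv("$HOME/.minikube/profiles/ha-025067/client.crt"),
            os.ExpandEnv("$HOME/.minikube/profiles/ha-025067/client.key"))
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/ca.crt"))
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }}}
        for {
            resp, err := client.Get("https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02")
            if err != nil {
                panic(err)
            }
            var n node
            if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
                panic(err)
            }
            resp.Body.Close()
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" && c.Status == "True" {
                    fmt.Println("node is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
        }
    }
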
	I0422 17:18:48.810131   30338 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 17:18:48.810232   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:48.810243   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.810250   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.810254   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.815491   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:48.822373   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.822467   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nswqp
	I0422 17:18:48.822483   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.822494   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.822499   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.825459   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.826264   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:48.826280   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.826286   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.826291   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.829367   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.830213   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:48.830232   30338 pod_ready.go:81] duration metric: took 7.833056ms for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.830241   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.830289   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vrl4h
	I0422 17:18:48.830298   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.830305   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.830310   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.833234   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.834049   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:48.834062   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.834070   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.834076   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.837356   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.837820   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:48.837840   30338 pod_ready.go:81] duration metric: took 7.592161ms for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.837852   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.837913   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067
	I0422 17:18:48.837924   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.837933   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.837940   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.840217   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.840844   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:48.840862   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.840871   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.840875   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.842868   30338 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 17:18:48.843420   30338 pod_ready.go:92] pod "etcd-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:48.843435   30338 pod_ready.go:81] duration metric: took 5.575474ms for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.843442   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.843496   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:48.843504   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.843510   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.843517   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.846010   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.846815   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:48.846830   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.846836   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.846840   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.848962   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:49.344421   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:49.344450   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.344461   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.344468   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.409699   30338 round_trippers.go:574] Response Status: 200 OK in 65 milliseconds
	I0422 17:18:49.410869   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:49.410889   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.410896   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.410900   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.413510   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:49.844304   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:49.844327   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.844334   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.844341   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.847973   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:49.848836   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:49.848851   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.848857   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.848861   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.851243   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:50.344106   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:50.344129   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.344138   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.344155   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.347503   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:50.348114   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:50.348128   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.348135   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.348140   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.350729   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:50.843724   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:50.843745   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.843754   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.843757   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.847358   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:50.848008   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:50.848029   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.848039   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.848044   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.850968   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:50.851859   30338 pod_ready.go:102] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 17:18:51.344344   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:51.344367   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.344374   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.344379   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.348369   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:51.349069   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:51.349085   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.349095   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.349102   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.351896   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:51.843712   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:51.843734   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.843742   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.843745   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.847100   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:51.848256   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:51.848275   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.848286   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.848290   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.851228   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:52.344416   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:52.344448   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.344459   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.344464   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.348046   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:52.348910   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:52.348925   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.348935   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.348940   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.352502   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:52.843652   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:52.843681   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.843692   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.843697   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.846959   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:52.847851   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:52.847872   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.847882   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.847887   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.852101   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:52.852823   30338 pod_ready.go:102] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 17:18:53.344307   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:53.344330   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.344351   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.344356   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.347668   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:53.348336   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:53.348350   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.348357   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.348361   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.350905   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.351681   30338 pod_ready.go:92] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.351699   30338 pod_ready.go:81] duration metric: took 4.508251275s for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.351712   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.351763   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067
	I0422 17:18:53.351770   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.351777   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.351783   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.354640   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.355363   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:53.355382   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.355389   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.355392   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.357695   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.358179   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.358196   30338 pod_ready.go:81] duration metric: took 6.478929ms for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.358204   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.358242   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m02
	I0422 17:18:53.358246   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.358253   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.358257   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.360805   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.361356   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:53.361370   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.361376   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.361379   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.363591   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.364035   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.364056   30338 pod_ready.go:81] duration metric: took 5.842627ms for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.364064   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.403397   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067
	I0422 17:18:53.403419   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.403434   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.403438   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.406511   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:53.603464   30338 request.go:629] Waited for 196.351505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:53.603544   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:53.603552   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.603562   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.603569   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.606668   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:53.607255   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.607272   30338 pod_ready.go:81] duration metric: took 243.202638ms for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.607281   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.803734   30338 request.go:629] Waited for 196.394465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:53.803808   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:53.803814   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.803822   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.803828   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.808005   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:54.003359   30338 request.go:629] Waited for 194.356848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.003417   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.003421   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.003429   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.003433   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.007606   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:54.203188   30338 request.go:629] Waited for 95.250873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:54.203244   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:54.203249   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.203256   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.203260   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.207278   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:54.403755   30338 request.go:629] Waited for 195.431105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.403831   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.403857   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.403870   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.403879   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.407442   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:54.608047   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:54.608074   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.608084   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.608090   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.615023   30338 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 17:18:54.803345   30338 request.go:629] Waited for 187.367934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.803412   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.803420   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.803427   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.803433   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.806995   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.108127   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:55.108157   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.108166   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.108173   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.111186   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:55.203259   30338 request.go:629] Waited for 91.300258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:55.203348   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:55.203356   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.203365   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.203373   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.207365   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.608435   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:55.608461   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.608469   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.608472   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.611909   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.612984   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:55.613002   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.613011   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.613017   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.616188   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.616950   30338 pod_ready.go:102] pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 17:18:56.108277   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:56.108304   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.108317   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.108322   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.113671   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:56.114804   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:56.114818   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.114826   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.114830   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.119275   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:56.120045   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:56.120063   30338 pod_ready.go:81] duration metric: took 2.512776333s for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.120073   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.203399   30338 request.go:629] Waited for 83.267456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:18:56.203514   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:18:56.203523   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.203531   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.203534   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.206848   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:56.403282   30338 request.go:629] Waited for 195.389862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:56.403337   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:56.403347   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.403354   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.403358   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.407782   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:56.408681   30338 pod_ready.go:92] pod "kube-proxy-dk5ww" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:56.408699   30338 pod_ready.go:81] duration metric: took 288.619685ms for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.408708   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.603167   30338 request.go:629] Waited for 194.396956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:18:56.603223   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:18:56.603238   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.603245   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.603249   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.606228   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:56.803462   30338 request.go:629] Waited for 196.39236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:56.803516   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:56.803521   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.803528   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.803532   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.807268   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:56.808023   30338 pod_ready.go:92] pod "kube-proxy-pf7cc" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:56.808043   30338 pod_ready.go:81] duration metric: took 399.329212ms for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.808052   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:57.003146   30338 request.go:629] Waited for 195.007817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:18:57.003226   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:18:57.003235   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.003244   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.003249   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.007292   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:57.203609   30338 request.go:629] Waited for 195.392162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:57.203694   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:57.203702   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.203714   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.203728   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.207763   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:57.209178   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:57.209198   30338 pod_ready.go:81] duration metric: took 401.138629ms for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:57.209217   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:57.403804   30338 request.go:629] Waited for 194.516914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.403870   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.403878   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.403887   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.403893   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.407542   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:57.603053   30338 request.go:629] Waited for 194.24827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:57.603109   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:57.603153   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.603165   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.603169   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.605891   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:57.803089   30338 request.go:629] Waited for 93.263261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.803177   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.803186   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.803193   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.803197   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.806591   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.003745   30338 request.go:629] Waited for 196.38252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.003822   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.003830   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.003841   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.003850   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.007525   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.209401   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:58.209423   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.209431   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.209435   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.212551   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.403558   30338 request.go:629] Waited for 190.383656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.403646   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.403654   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.403661   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.403668   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.407607   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.710412   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:58.710442   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.710450   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.710453   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.714043   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.802904   30338 request.go:629] Waited for 88.185936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.802976   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.802981   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.802988   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.802997   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.806875   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:59.210350   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:59.210373   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.210384   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.210389   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.213785   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:59.214661   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:59.214675   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.214683   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.214689   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.217426   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:59.218276   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:59.218294   30338 pod_ready.go:81] duration metric: took 2.00906977s for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:59.218304   30338 pod_ready.go:38] duration metric: took 10.408148984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
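	The readiness wait traced above alternates a GET on the pod with a GET on its node until the pod reports Ready=True, retrying roughly every 500ms for up to 6 minutes. Below is a minimal client-go sketch of that polling pattern; it is illustrative only, and the kubeconfig path, namespace, pod name, and timeout are assumptions for the example rather than values taken from minikube's code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from a kubeconfig; the path is a placeholder for this example.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 500ms, up to 6 minutes, until the pod's Ready condition is True.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-025067-m02", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}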
	I0422 17:18:59.218317   30338 api_server.go:52] waiting for apiserver process to appear ...
	I0422 17:18:59.218366   30338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:18:59.235446   30338 api_server.go:72] duration metric: took 17.819876382s to wait for apiserver process to appear ...
	I0422 17:18:59.235477   30338 api_server.go:88] waiting for apiserver healthz status ...
	I0422 17:18:59.235499   30338 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I0422 17:18:59.239726   30338 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
	I0422 17:18:59.239804   30338 round_trippers.go:463] GET https://192.168.39.22:8443/version
	I0422 17:18:59.239812   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.239827   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.239836   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.240823   30338 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 17:18:59.240919   30338 api_server.go:141] control plane version: v1.30.0
	I0422 17:18:59.240937   30338 api_server.go:131] duration metric: took 5.451788ms to wait for apiserver health ...
	I0422 17:18:59.240947   30338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 17:18:59.403321   30338 request.go:629] Waited for 162.311156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.403429   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.403439   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.403447   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.403461   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.409326   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:59.416517   30338 system_pods.go:59] 17 kube-system pods found
	I0422 17:18:59.416558   30338 system_pods.go:61] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:18:59.416572   30338 system_pods.go:61] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:18:59.416576   30338 system_pods.go:61] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:18:59.416579   30338 system_pods.go:61] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:18:59.416582   30338 system_pods.go:61] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:18:59.416585   30338 system_pods.go:61] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:18:59.416588   30338 system_pods.go:61] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:18:59.416591   30338 system_pods.go:61] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:18:59.416594   30338 system_pods.go:61] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:18:59.416597   30338 system_pods.go:61] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:18:59.416602   30338 system_pods.go:61] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:18:59.416606   30338 system_pods.go:61] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:18:59.416611   30338 system_pods.go:61] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:18:59.416630   30338 system_pods.go:61] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:18:59.416634   30338 system_pods.go:61] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:18:59.416638   30338 system_pods.go:61] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:18:59.416640   30338 system_pods.go:61] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:18:59.416647   30338 system_pods.go:74] duration metric: took 175.690311ms to wait for pod list to return data ...
	I0422 17:18:59.416656   30338 default_sa.go:34] waiting for default service account to be created ...
	I0422 17:18:59.603182   30338 request.go:629] Waited for 186.452528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:18:59.603242   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:18:59.603249   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.603258   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.603264   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.608475   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:59.608696   30338 default_sa.go:45] found service account: "default"
	I0422 17:18:59.608712   30338 default_sa.go:55] duration metric: took 192.047543ms for default service account to be created ...
	I0422 17:18:59.608720   30338 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 17:18:59.803194   30338 request.go:629] Waited for 194.417195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.803251   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.803256   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.803263   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.803266   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.809180   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:59.813316   30338 system_pods.go:86] 17 kube-system pods found
	I0422 17:18:59.813345   30338 system_pods.go:89] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:18:59.813350   30338 system_pods.go:89] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:18:59.813355   30338 system_pods.go:89] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:18:59.813358   30338 system_pods.go:89] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:18:59.813363   30338 system_pods.go:89] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:18:59.813367   30338 system_pods.go:89] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:18:59.813370   30338 system_pods.go:89] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:18:59.813374   30338 system_pods.go:89] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:18:59.813377   30338 system_pods.go:89] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:18:59.813381   30338 system_pods.go:89] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:18:59.813385   30338 system_pods.go:89] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:18:59.813389   30338 system_pods.go:89] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:18:59.813392   30338 system_pods.go:89] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:18:59.813396   30338 system_pods.go:89] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:18:59.813399   30338 system_pods.go:89] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:18:59.813402   30338 system_pods.go:89] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:18:59.813405   30338 system_pods.go:89] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:18:59.813411   30338 system_pods.go:126] duration metric: took 204.687482ms to wait for k8s-apps to be running ...
	I0422 17:18:59.813420   30338 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 17:18:59.813465   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:18:59.829589   30338 system_svc.go:56] duration metric: took 16.160392ms WaitForService to wait for kubelet
	I0422 17:18:59.829616   30338 kubeadm.go:576] duration metric: took 18.414051448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:18:59.829634   30338 node_conditions.go:102] verifying NodePressure condition ...
	I0422 17:19:00.002907   30338 request.go:629] Waited for 173.204088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes
	I0422 17:19:00.002991   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes
	I0422 17:19:00.002998   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:00.003008   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:00.003016   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:00.006533   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:19:00.007192   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:19:00.007213   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:19:00.007226   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:19:00.007231   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:19:00.007236   30338 node_conditions.go:105] duration metric: took 177.597848ms to run NodePressure ...
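	The NodePressure step above lists every node and reads its reported capacity (here 2 CPUs and 17734596Ki of ephemeral storage per node). As a hedged illustration, a small helper like the hypothetical printNodeCapacity below could surface the same fields, assuming the imports and clientset from the sketch earlier in this log.

	// printNodeCapacity is a hypothetical helper that prints the capacity fields the
	// NodePressure check reads; it is not minikube's implementation.
	func printNodeCapacity(ctx context.Context, client kubernetes.Interface) error {
		nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
		return nil
	}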
	I0422 17:19:00.007250   30338 start.go:240] waiting for startup goroutines ...
	I0422 17:19:00.007277   30338 start.go:254] writing updated cluster config ...
	I0422 17:19:00.010089   30338 out.go:177] 
	I0422 17:19:00.011879   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:00.011986   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:19:00.013767   30338 out.go:177] * Starting "ha-025067-m03" control-plane node in "ha-025067" cluster
	I0422 17:19:00.014985   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:19:00.015009   30338 cache.go:56] Caching tarball of preloaded images
	I0422 17:19:00.015114   30338 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:19:00.015141   30338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:19:00.015243   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:19:00.015426   30338 start.go:360] acquireMachinesLock for ha-025067-m03: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:19:00.015487   30338 start.go:364] duration metric: took 39.538µs to acquireMachinesLock for "ha-025067-m03"
	I0422 17:19:00.015511   30338 start.go:93] Provisioning new machine with config: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:19:00.015619   30338 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0422 17:19:00.017385   30338 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 17:19:00.017486   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:00.017526   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:00.032459   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0422 17:19:00.032874   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:00.033374   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:00.033393   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:00.033721   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:00.033907   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:00.034008   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:00.034222   30338 start.go:159] libmachine.API.Create for "ha-025067" (driver="kvm2")
	I0422 17:19:00.034270   30338 client.go:168] LocalClient.Create starting
	I0422 17:19:00.034314   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 17:19:00.034355   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:19:00.034374   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:19:00.034438   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 17:19:00.034466   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:19:00.034482   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:19:00.034510   30338 main.go:141] libmachine: Running pre-create checks...
	I0422 17:19:00.034521   30338 main.go:141] libmachine: (ha-025067-m03) Calling .PreCreateCheck
	I0422 17:19:00.034759   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetConfigRaw
	I0422 17:19:00.035234   30338 main.go:141] libmachine: Creating machine...
	I0422 17:19:00.035252   30338 main.go:141] libmachine: (ha-025067-m03) Calling .Create
	I0422 17:19:00.035398   30338 main.go:141] libmachine: (ha-025067-m03) Creating KVM machine...
	I0422 17:19:00.036655   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found existing default KVM network
	I0422 17:19:00.036752   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found existing private KVM network mk-ha-025067
	I0422 17:19:00.036922   30338 main.go:141] libmachine: (ha-025067-m03) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03 ...
	I0422 17:19:00.036945   30338 main.go:141] libmachine: (ha-025067-m03) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 17:19:00.037001   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.036880   31595 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:19:00.037065   30338 main.go:141] libmachine: (ha-025067-m03) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 17:19:00.246743   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.246609   31595 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa...
	I0422 17:19:00.355574   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.355473   31595 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/ha-025067-m03.rawdisk...
	I0422 17:19:00.355598   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Writing magic tar header
	I0422 17:19:00.355609   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Writing SSH key tar header
	I0422 17:19:00.355617   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.355577   31595 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03 ...
	I0422 17:19:00.355676   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03
	I0422 17:19:00.355691   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 17:19:00.355700   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03 (perms=drwx------)
	I0422 17:19:00.355749   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:19:00.355774   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 17:19:00.355790   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 17:19:00.355805   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 17:19:00.355824   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 17:19:00.355834   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins
	I0422 17:19:00.355851   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 17:19:00.355865   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 17:19:00.355875   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home
	I0422 17:19:00.355892   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Skipping /home - not owner
	I0422 17:19:00.355908   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 17:19:00.355919   30338 main.go:141] libmachine: (ha-025067-m03) Creating domain...
	I0422 17:19:00.356771   30338 main.go:141] libmachine: (ha-025067-m03) define libvirt domain using xml: 
	I0422 17:19:00.356815   30338 main.go:141] libmachine: (ha-025067-m03) <domain type='kvm'>
	I0422 17:19:00.356831   30338 main.go:141] libmachine: (ha-025067-m03)   <name>ha-025067-m03</name>
	I0422 17:19:00.356847   30338 main.go:141] libmachine: (ha-025067-m03)   <memory unit='MiB'>2200</memory>
	I0422 17:19:00.356860   30338 main.go:141] libmachine: (ha-025067-m03)   <vcpu>2</vcpu>
	I0422 17:19:00.356870   30338 main.go:141] libmachine: (ha-025067-m03)   <features>
	I0422 17:19:00.356881   30338 main.go:141] libmachine: (ha-025067-m03)     <acpi/>
	I0422 17:19:00.356890   30338 main.go:141] libmachine: (ha-025067-m03)     <apic/>
	I0422 17:19:00.356898   30338 main.go:141] libmachine: (ha-025067-m03)     <pae/>
	I0422 17:19:00.356903   30338 main.go:141] libmachine: (ha-025067-m03)     
	I0422 17:19:00.356909   30338 main.go:141] libmachine: (ha-025067-m03)   </features>
	I0422 17:19:00.356913   30338 main.go:141] libmachine: (ha-025067-m03)   <cpu mode='host-passthrough'>
	I0422 17:19:00.356918   30338 main.go:141] libmachine: (ha-025067-m03)   
	I0422 17:19:00.356922   30338 main.go:141] libmachine: (ha-025067-m03)   </cpu>
	I0422 17:19:00.356965   30338 main.go:141] libmachine: (ha-025067-m03)   <os>
	I0422 17:19:00.356993   30338 main.go:141] libmachine: (ha-025067-m03)     <type>hvm</type>
	I0422 17:19:00.357027   30338 main.go:141] libmachine: (ha-025067-m03)     <boot dev='cdrom'/>
	I0422 17:19:00.357049   30338 main.go:141] libmachine: (ha-025067-m03)     <boot dev='hd'/>
	I0422 17:19:00.357063   30338 main.go:141] libmachine: (ha-025067-m03)     <bootmenu enable='no'/>
	I0422 17:19:00.357073   30338 main.go:141] libmachine: (ha-025067-m03)   </os>
	I0422 17:19:00.357088   30338 main.go:141] libmachine: (ha-025067-m03)   <devices>
	I0422 17:19:00.357099   30338 main.go:141] libmachine: (ha-025067-m03)     <disk type='file' device='cdrom'>
	I0422 17:19:00.357114   30338 main.go:141] libmachine: (ha-025067-m03)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/boot2docker.iso'/>
	I0422 17:19:00.357130   30338 main.go:141] libmachine: (ha-025067-m03)       <target dev='hdc' bus='scsi'/>
	I0422 17:19:00.357142   30338 main.go:141] libmachine: (ha-025067-m03)       <readonly/>
	I0422 17:19:00.357149   30338 main.go:141] libmachine: (ha-025067-m03)     </disk>
	I0422 17:19:00.357162   30338 main.go:141] libmachine: (ha-025067-m03)     <disk type='file' device='disk'>
	I0422 17:19:00.357175   30338 main.go:141] libmachine: (ha-025067-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 17:19:00.357190   30338 main.go:141] libmachine: (ha-025067-m03)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/ha-025067-m03.rawdisk'/>
	I0422 17:19:00.357206   30338 main.go:141] libmachine: (ha-025067-m03)       <target dev='hda' bus='virtio'/>
	I0422 17:19:00.357218   30338 main.go:141] libmachine: (ha-025067-m03)     </disk>
	I0422 17:19:00.357229   30338 main.go:141] libmachine: (ha-025067-m03)     <interface type='network'>
	I0422 17:19:00.357243   30338 main.go:141] libmachine: (ha-025067-m03)       <source network='mk-ha-025067'/>
	I0422 17:19:00.357251   30338 main.go:141] libmachine: (ha-025067-m03)       <model type='virtio'/>
	I0422 17:19:00.357263   30338 main.go:141] libmachine: (ha-025067-m03)     </interface>
	I0422 17:19:00.357277   30338 main.go:141] libmachine: (ha-025067-m03)     <interface type='network'>
	I0422 17:19:00.357290   30338 main.go:141] libmachine: (ha-025067-m03)       <source network='default'/>
	I0422 17:19:00.357301   30338 main.go:141] libmachine: (ha-025067-m03)       <model type='virtio'/>
	I0422 17:19:00.357313   30338 main.go:141] libmachine: (ha-025067-m03)     </interface>
	I0422 17:19:00.357330   30338 main.go:141] libmachine: (ha-025067-m03)     <serial type='pty'>
	I0422 17:19:00.357343   30338 main.go:141] libmachine: (ha-025067-m03)       <target port='0'/>
	I0422 17:19:00.357353   30338 main.go:141] libmachine: (ha-025067-m03)     </serial>
	I0422 17:19:00.357361   30338 main.go:141] libmachine: (ha-025067-m03)     <console type='pty'>
	I0422 17:19:00.357366   30338 main.go:141] libmachine: (ha-025067-m03)       <target type='serial' port='0'/>
	I0422 17:19:00.357392   30338 main.go:141] libmachine: (ha-025067-m03)     </console>
	I0422 17:19:00.357412   30338 main.go:141] libmachine: (ha-025067-m03)     <rng model='virtio'>
	I0422 17:19:00.357430   30338 main.go:141] libmachine: (ha-025067-m03)       <backend model='random'>/dev/random</backend>
	I0422 17:19:00.357446   30338 main.go:141] libmachine: (ha-025067-m03)     </rng>
	I0422 17:19:00.357461   30338 main.go:141] libmachine: (ha-025067-m03)     
	I0422 17:19:00.357474   30338 main.go:141] libmachine: (ha-025067-m03)     
	I0422 17:19:00.357487   30338 main.go:141] libmachine: (ha-025067-m03)   </devices>
	I0422 17:19:00.357497   30338 main.go:141] libmachine: (ha-025067-m03) </domain>
	I0422 17:19:00.357511   30338 main.go:141] libmachine: (ha-025067-m03) 
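The XML dumped above is what gets handed to libvirt to define and boot the new control-plane VM. A minimal sketch of that step in Go, assuming the libvirt.org/go/libvirt bindings and a domainXML string like the one logged; this is an illustration of the define/start sequence, not the kvm2 driver's exact code:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart defines a persistent domain from XML and then boots it,
    // matching the "define libvirt domain using xml" / "Creating domain..." steps above.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // actually starts the VM
    }

    func main() {
        // Replace the placeholder with a full <domain type='kvm'>...</domain> document.
        if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
            log.Fatal(err)
        }
    }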
	I0422 17:19:00.365198   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:17:58:f5 in network default
	I0422 17:19:00.366022   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:00.366069   30338 main.go:141] libmachine: (ha-025067-m03) Ensuring networks are active...
	I0422 17:19:00.366777   30338 main.go:141] libmachine: (ha-025067-m03) Ensuring network default is active
	I0422 17:19:00.367175   30338 main.go:141] libmachine: (ha-025067-m03) Ensuring network mk-ha-025067 is active
	I0422 17:19:00.367662   30338 main.go:141] libmachine: (ha-025067-m03) Getting domain xml...
	I0422 17:19:00.368395   30338 main.go:141] libmachine: (ha-025067-m03) Creating domain...
	I0422 17:19:01.598047   30338 main.go:141] libmachine: (ha-025067-m03) Waiting to get IP...
	I0422 17:19:01.598841   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:01.599342   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:01.599379   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:01.599331   31595 retry.go:31] will retry after 244.474614ms: waiting for machine to come up
	I0422 17:19:01.845861   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:01.846396   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:01.846437   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:01.846349   31595 retry.go:31] will retry after 251.22244ms: waiting for machine to come up
	I0422 17:19:02.098746   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:02.099263   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:02.099291   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:02.099213   31595 retry.go:31] will retry after 295.500227ms: waiting for machine to come up
	I0422 17:19:02.396509   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:02.397019   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:02.397049   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:02.396975   31595 retry.go:31] will retry after 482.051032ms: waiting for machine to come up
	I0422 17:19:02.880143   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:02.880651   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:02.880684   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:02.880590   31595 retry.go:31] will retry after 711.029818ms: waiting for machine to come up
	I0422 17:19:03.593368   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:03.593807   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:03.593835   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:03.593755   31595 retry.go:31] will retry after 718.341687ms: waiting for machine to come up
	I0422 17:19:04.313375   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:04.313803   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:04.313886   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:04.313750   31595 retry.go:31] will retry after 747.746364ms: waiting for machine to come up
	I0422 17:19:05.063188   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:05.063669   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:05.063699   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:05.063636   31595 retry.go:31] will retry after 1.482792332s: waiting for machine to come up
	I0422 17:19:06.548134   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:06.548546   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:06.548580   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:06.548508   31595 retry.go:31] will retry after 1.591222295s: waiting for machine to come up
	I0422 17:19:08.141775   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:08.142271   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:08.142299   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:08.142226   31595 retry.go:31] will retry after 1.545760207s: waiting for machine to come up
	I0422 17:19:09.689109   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:09.689528   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:09.689559   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:09.689467   31595 retry.go:31] will retry after 2.68939632s: waiting for machine to come up
	I0422 17:19:12.380233   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:12.380565   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:12.380584   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:12.380538   31595 retry.go:31] will retry after 2.724038671s: waiting for machine to come up
	I0422 17:19:15.106266   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:15.106707   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:15.106730   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:15.106664   31595 retry.go:31] will retry after 3.963134485s: waiting for machine to come up
	I0422 17:19:19.074771   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:19.075307   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:19.075347   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:19.075256   31595 retry.go:31] will retry after 5.52357941s: waiting for machine to come up
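The "Waiting to get IP" phase above is a poll of the network's DHCP leases for the domain's MAC address, with a growing delay between attempts (the "will retry after ..." lines). A rough Go equivalent of that loop, where lookupLeaseIP is a hypothetical stand-in for the real lease lookup:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupLeaseIP is a hypothetical helper standing in for querying the
    // mk-ha-025067 network's DHCP leases for the domain's MAC address.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errNoLease // pretend the lease has not appeared yet
    }

    // waitForIP retries the lease lookup with an increasing delay until a deadline,
    // mirroring the retry messages in the log above.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
        delay := 250 * time.Millisecond
        start := time.Now()
        for time.Since(start) < deadline {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay += delay / 2 // back off roughly like the growing intervals above
        }
        return "", fmt.Errorf("timed out waiting for IP on MAC %s", mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:d5:51:30", 3*time.Second); err == nil {
            fmt.Println("Found IP for machine:", ip)
        }
    }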
	I0422 17:19:24.601566   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.602004   30338 main.go:141] libmachine: (ha-025067-m03) Found IP for machine: 192.168.39.220
	I0422 17:19:24.602021   30338 main.go:141] libmachine: (ha-025067-m03) Reserving static IP address...
	I0422 17:19:24.602035   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has current primary IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.602411   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find host DHCP lease matching {name: "ha-025067-m03", mac: "52:54:00:d5:51:30", ip: "192.168.39.220"} in network mk-ha-025067
	I0422 17:19:24.675429   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Getting to WaitForSSH function...
	I0422 17:19:24.675461   30338 main.go:141] libmachine: (ha-025067-m03) Reserved static IP address: 192.168.39.220
	I0422 17:19:24.675475   30338 main.go:141] libmachine: (ha-025067-m03) Waiting for SSH to be available...
	I0422 17:19:24.677939   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.678358   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:24.678394   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.678542   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Using SSH client type: external
	I0422 17:19:24.678569   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa (-rw-------)
	I0422 17:19:24.678599   30338 main.go:141] libmachine: (ha-025067-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 17:19:24.678712   30338 main.go:141] libmachine: (ha-025067-m03) DBG | About to run SSH command:
	I0422 17:19:24.678730   30338 main.go:141] libmachine: (ha-025067-m03) DBG | exit 0
	I0422 17:19:24.803345   30338 main.go:141] libmachine: (ha-025067-m03) DBG | SSH cmd err, output: <nil>: 
	I0422 17:19:24.803636   30338 main.go:141] libmachine: (ha-025067-m03) KVM machine creation complete!
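"Waiting for SSH to be available" boils down to repeatedly running "exit 0" over SSH with the machine's generated key until it succeeds. A minimal sketch with golang.org/x/crypto/ssh; the address and key path below are taken from this log purely for illustration:

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshReady returns nil once "exit 0" succeeds over SSH, i.e. the guest is reachable.
    func sshReady(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }

    func main() {
        err := sshReady("192.168.39.220:22", "docker",
            "/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }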
	I0422 17:19:24.804017   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetConfigRaw
	I0422 17:19:24.804550   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:24.804756   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:24.804913   30338 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 17:19:24.804928   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:19:24.806319   30338 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 17:19:24.806334   30338 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 17:19:24.806340   30338 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 17:19:24.806345   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:24.808586   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.808971   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:24.808997   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.809143   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:24.809315   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.809463   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.809569   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:24.809744   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:24.809965   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:24.809976   30338 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 17:19:24.918733   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:19:24.918758   30338 main.go:141] libmachine: Detecting the provisioner...
	I0422 17:19:24.918770   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:24.921489   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.921876   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:24.921900   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.922070   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:24.922245   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.922432   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.922565   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:24.922722   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:24.922879   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:24.922891   30338 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 17:19:25.028496   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 17:19:25.028556   30338 main.go:141] libmachine: found compatible host: buildroot
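Detecting the provisioner is just "cat /etc/os-release" followed by matching the ID/NAME fields (here Buildroot). A small sketch of parsing that output, assuming the remote command already returned the text shown above:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // osReleaseID extracts the ID= field from /etc/os-release style output.
    func osReleaseID(output string) string {
        sc := bufio.NewScanner(strings.NewReader(output))
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        if osReleaseID(out) == "buildroot" {
            fmt.Println("found compatible host: buildroot")
        }
    }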
	I0422 17:19:25.028565   30338 main.go:141] libmachine: Provisioning with buildroot...
	I0422 17:19:25.028575   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:25.028914   30338 buildroot.go:166] provisioning hostname "ha-025067-m03"
	I0422 17:19:25.028945   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:25.029218   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.032170   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.032603   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.032634   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.032869   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.033034   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.033296   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.033491   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.033677   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:25.033861   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:25.033877   30338 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067-m03 && echo "ha-025067-m03" | sudo tee /etc/hostname
	I0422 17:19:25.162873   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067-m03
	
	I0422 17:19:25.162902   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.165681   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.166088   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.166115   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.166350   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.166515   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.166719   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.166863   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.167012   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:25.167263   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:25.167281   30338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:19:25.285404   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:19:25.285436   30338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:19:25.285457   30338 buildroot.go:174] setting up certificates
	I0422 17:19:25.285476   30338 provision.go:84] configureAuth start
	I0422 17:19:25.285493   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:25.285752   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:25.288807   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.289257   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.289288   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.289456   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.291665   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.292124   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.292152   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.292306   30338 provision.go:143] copyHostCerts
	I0422 17:19:25.292341   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:19:25.292381   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:19:25.292403   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:19:25.292466   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:19:25.292541   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:19:25.292558   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:19:25.292565   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:19:25.292587   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:19:25.292629   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:19:25.292645   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:19:25.292652   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:19:25.292671   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:19:25.292718   30338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067-m03 san=[127.0.0.1 192.168.39.220 ha-025067-m03 localhost minikube]
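Generating the server cert amounts to signing a certificate with the minikube CA whose SANs cover the IPs and hostnames listed in the san=[...] line above. A condensed standard-library sketch of that kind of issuance (in-memory CA, trimmed key sizes and error handling; not the exact minikube provisioning code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA standing in for .minikube/certs/ca.pem and ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with SANs like those in the log: loopback, node IP, hostnames.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-025067-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
            DNSNames:     []string{"ha-025067-m03", "localhost", "minikube"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println("server cert DER bytes:", len(srvDER))
    }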
	I0422 17:19:25.497634   30338 provision.go:177] copyRemoteCerts
	I0422 17:19:25.497698   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:19:25.497719   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.500463   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.500806   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.500841   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.501023   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.501276   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.501474   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.501632   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:25.586916   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:19:25.586991   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 17:19:25.612978   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:19:25.613052   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:19:25.639265   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:19:25.639366   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 17:19:25.665128   30338 provision.go:87] duration metric: took 379.636943ms to configureAuth
	I0422 17:19:25.665156   30338 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:19:25.665381   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:25.665462   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.668354   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.668759   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.668787   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.668967   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.669179   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.669372   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.669526   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.669709   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:25.669861   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:25.669877   30338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:19:25.964438   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:19:25.964471   30338 main.go:141] libmachine: Checking connection to Docker...
	I0422 17:19:25.964482   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetURL
	I0422 17:19:25.965870   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Using libvirt version 6000000
	I0422 17:19:25.968178   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.968500   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.968542   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.968770   30338 main.go:141] libmachine: Docker is up and running!
	I0422 17:19:25.968785   30338 main.go:141] libmachine: Reticulating splines...
	I0422 17:19:25.968792   30338 client.go:171] duration metric: took 25.93451s to LocalClient.Create
	I0422 17:19:25.968818   30338 start.go:167] duration metric: took 25.934601441s to libmachine.API.Create "ha-025067"
	I0422 17:19:25.968830   30338 start.go:293] postStartSetup for "ha-025067-m03" (driver="kvm2")
	I0422 17:19:25.968844   30338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:19:25.968865   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:25.969114   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:19:25.969137   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.971550   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.971990   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.972007   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.972216   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.972410   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.972559   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.972709   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:26.058474   30338 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:19:26.063482   30338 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:19:26.063510   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:19:26.063588   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:19:26.063682   30338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:19:26.063694   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:19:26.063815   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:19:26.074247   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:19:26.101563   30338 start.go:296] duration metric: took 132.698316ms for postStartSetup
	I0422 17:19:26.101614   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetConfigRaw
	I0422 17:19:26.102182   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:26.105117   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.105507   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.105540   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.105854   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:19:26.106115   30338 start.go:128] duration metric: took 26.090482271s to createHost
	I0422 17:19:26.106145   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:26.108308   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.108669   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.108693   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.108903   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:26.109091   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.109263   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.109441   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:26.109610   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:26.109766   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:26.109776   30338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:19:26.212431   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806366.186296116
	
	I0422 17:19:26.212458   30338 fix.go:216] guest clock: 1713806366.186296116
	I0422 17:19:26.212467   30338 fix.go:229] Guest: 2024-04-22 17:19:26.186296116 +0000 UTC Remote: 2024-04-22 17:19:26.106130991 +0000 UTC m=+153.613398839 (delta=80.165125ms)
	I0422 17:19:26.212481   30338 fix.go:200] guest clock delta is within tolerance: 80.165125ms
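The guest-clock check above compares the time reported by the VM (1713806366.186296116) with the host's wall clock and only resyncs when the delta exceeds a tolerance. A toy version of that comparison with this run's values hard-coded; the 2s threshold is an assumption for illustration, not the value minikube uses:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest time parsed from the guest's date output above; host time from the local clock.
        guest := time.Unix(1713806366, 186296116)
        host := time.Date(2024, 4, 22, 17, 19, 26, 106130991, time.UTC)

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed tolerance for this sketch
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
        }
    }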
	I0422 17:19:26.212485   30338 start.go:83] releasing machines lock for "ha-025067-m03", held for 26.196987955s
	I0422 17:19:26.212501   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.212814   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:26.215926   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.216275   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.216299   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.218736   30338 out.go:177] * Found network options:
	I0422 17:19:26.220289   30338 out.go:177]   - NO_PROXY=192.168.39.22,192.168.39.56
	W0422 17:19:26.221805   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 17:19:26.221830   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 17:19:26.221851   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.222469   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.222671   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.222777   30338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:19:26.222811   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	W0422 17:19:26.222917   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 17:19:26.222942   30338 proxy.go:119] fail to check proxy env: Error ip not in block
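The "fail to check proxy env: Error ip not in block" warnings come from checking whether the new node's IP is already covered by a NO_PROXY entry (a plain IP or a CIDR block). A rough sketch of that kind of check using this run's addresses; the error text and exact matching rules here are illustrative, not minikube's proxy code:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // coveredByNoProxy reports whether ip is matched by any comma-separated NO_PROXY entry,
    // treating entries containing '/' as CIDR blocks and everything else as exact IPs.
    func coveredByNoProxy(noProxy, ip string) (bool, error) {
        target := net.ParseIP(ip)
        if target == nil {
            return false, fmt.Errorf("invalid ip %q", ip)
        }
        for _, entry := range strings.Split(noProxy, ",") {
            entry = strings.TrimSpace(entry)
            if entry == "" {
                continue
            }
            if strings.Contains(entry, "/") {
                _, block, err := net.ParseCIDR(entry)
                if err != nil {
                    return false, fmt.Errorf("ip not in block: %w", err)
                }
                if block.Contains(target) {
                    return true, nil
                }
                continue
            }
            if entry == ip {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := coveredByNoProxy("192.168.39.22,192.168.39.56", "192.168.39.220")
        fmt.Println(ok, err) // the third node's IP is not yet covered by NO_PROXY
    }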
	I0422 17:19:26.223010   30338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:19:26.223035   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:26.225776   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226051   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226106   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.226145   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226316   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:26.226500   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.226586   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.226610   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226678   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:26.226830   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:26.226908   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:26.227035   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.227200   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:26.227362   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:26.464942   30338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:19:26.472422   30338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:19:26.472501   30338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:19:26.491058   30338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 17:19:26.491084   30338 start.go:494] detecting cgroup driver to use...
	I0422 17:19:26.491170   30338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:19:26.509584   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:19:26.526690   30338 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:19:26.526748   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:19:26.543143   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:19:26.558862   30338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:19:26.686214   30338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:19:26.826319   30338 docker.go:233] disabling docker service ...
	I0422 17:19:26.826418   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:19:26.844632   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:19:26.859567   30338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:19:26.996620   30338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:19:27.123443   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:19:27.139044   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:19:27.159963   30338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:19:27.160017   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.171331   30338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:19:27.171402   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.183307   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.195182   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.207767   30338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:19:27.220048   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.232143   30338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.251630   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
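Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This excerpt is reconstructed from the commands themselves, not captured from the VM:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]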
	I0422 17:19:27.262786   30338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:19:27.273390   30338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 17:19:27.273448   30338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 17:19:27.287468   30338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:19:27.297408   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:19:27.411513   30338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:19:27.558913   30338 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:19:27.558988   30338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:19:27.564023   30338 start.go:562] Will wait 60s for crictl version
	I0422 17:19:27.564072   30338 ssh_runner.go:195] Run: which crictl
	I0422 17:19:27.568132   30338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:19:27.607546   30338 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:19:27.607635   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:19:27.636210   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:19:27.669693   30338 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:19:27.671231   30338 out.go:177]   - env NO_PROXY=192.168.39.22
	I0422 17:19:27.672698   30338 out.go:177]   - env NO_PROXY=192.168.39.22,192.168.39.56
	I0422 17:19:27.673944   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:27.676893   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:27.677358   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:27.677378   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:27.677614   30338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:19:27.682091   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
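The /etc/hosts rewrite above drops any existing host.minikube.internal line and appends a fresh mapping to the network gateway (192.168.39.1). A small Go sketch of the same filter-and-append logic, offered only to clarify what the grep -v / echo / cp pipeline does:

    package main

    import (
        "fmt"
        "strings"
    )

    // ensureHostsEntry removes any line ending in "\thost.minikube.internal" and
    // appends "<gatewayIP>\thost.minikube.internal", mirroring the shell pipeline above.
    func ensureHostsEntry(hosts, gatewayIP string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, gatewayIP+"\thost.minikube.internal")
        return strings.Join(kept, "\n")
    }

    func main() {
        fmt.Println(ensureHostsEntry("127.0.0.1\tlocalhost", "192.168.39.1"))
    }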
	I0422 17:19:27.695805   30338 mustload.go:65] Loading cluster: ha-025067
	I0422 17:19:27.696020   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:27.696262   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:27.696297   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:27.710954   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0422 17:19:27.711421   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:27.711967   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:27.711994   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:27.712305   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:27.712501   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:19:27.714037   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:19:27.714312   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:27.714356   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:27.730385   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0422 17:19:27.730803   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:27.731269   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:27.731292   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:27.731556   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:27.731728   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:19:27.731925   30338 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.220
	I0422 17:19:27.731938   30338 certs.go:194] generating shared ca certs ...
	I0422 17:19:27.731951   30338 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:19:27.732064   30338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:19:27.732100   30338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:19:27.732109   30338 certs.go:256] generating profile certs ...
	I0422 17:19:27.732172   30338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:19:27.732202   30338 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b
	I0422 17:19:27.732215   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.56 192.168.39.220 192.168.39.254]
	I0422 17:19:27.884238   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b ...
	I0422 17:19:27.884271   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b: {Name:mkf8a1a5c9798bf319c88d21c1edd7b4d37d492a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:19:27.884442   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b ...
	I0422 17:19:27.884455   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b: {Name:mkbc4ef4912eb3022a46d9eb81eca9c84bc0f030 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:19:27.884522   30338 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:19:27.884645   30338 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
	I0422 17:19:27.884764   30338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:19:27.884780   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:19:27.884792   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:19:27.884806   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:19:27.884818   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:19:27.884831   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:19:27.884846   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:19:27.884860   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:19:27.884871   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:19:27.884917   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:19:27.884943   30338 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:19:27.884953   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:19:27.884977   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:19:27.884997   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:19:27.885018   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:19:27.885055   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:19:27.885079   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:19:27.885093   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:27.885105   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:19:27.885142   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:19:27.888593   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:27.889027   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:19:27.889061   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:27.889210   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:19:27.889475   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:19:27.889649   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:19:27.889877   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:19:27.967607   30338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0422 17:19:27.977637   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 17:19:27.990775   30338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0422 17:19:27.995655   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 17:19:28.008873   30338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 17:19:28.013917   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 17:19:28.027340   30338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0422 17:19:28.032136   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 17:19:28.048584   30338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0422 17:19:28.054035   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 17:19:28.067212   30338 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0422 17:19:28.072616   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0422 17:19:28.085764   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:19:28.114423   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:19:28.140780   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:19:28.167423   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:19:28.193709   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0422 17:19:28.220501   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 17:19:28.247527   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:19:28.273706   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:19:28.300216   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:19:28.327833   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:19:28.354462   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:19:28.379883   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 17:19:28.397684   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 17:19:28.415952   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 17:19:28.433985   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 17:19:28.452588   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 17:19:28.470942   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0422 17:19:28.489473   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 17:19:28.506839   30338 ssh_runner.go:195] Run: openssl version
	I0422 17:19:28.512969   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:19:28.524445   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:28.529289   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:28.529355   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:28.535616   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:19:28.548283   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:19:28.560303   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:19:28.565142   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:19:28.565203   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:19:28.571467   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:19:28.584022   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:19:28.596018   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:19:28.600700   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:19:28.600757   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:19:28.607201   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
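The three symlink commands above implement OpenSSL's hashed-directory lookup: each CA bundle copied into /usr/share/ca-certificates also gets an /etc/ssl/certs/&lt;subject-hash&gt;.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0) so the TLS stack can find it by hash. A minimal Go sketch of the same step, assuming openssl is on PATH; this is an illustration of the technique, not minikube's certs.go, and the paths are placeholders:

// linkByHash mirrors the "openssl x509 -hash -noout -in <pem>" plus
// "ln -fs ... /etc/ssl/certs/<hash>.0" sequence shown in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(pemPath, certsDir string) error {
	// Ask openssl for the subject hash of the certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, then point <hash>.0 at the PEM file.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	// Placeholder paths; the log links via an intermediate /etc/ssl/certs copy,
	// this sketch links the hash name straight to the PEM for brevity.
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}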
	I0422 17:19:28.619523   30338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:19:28.623761   30338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 17:19:28.623816   30338 kubeadm.go:928] updating node {m03 192.168.39.220 8443 v1.30.0 crio true true} ...
	I0422 17:19:28.623953   30338 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:19:28.623980   30338 kube-vip.go:111] generating kube-vip config ...
	I0422 17:19:28.624011   30338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:19:28.642523   30338 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:19:28.642584   30338 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
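The manifest above is the kube-vip static pod that each control-plane node runs: it advertises the cluster VIP 192.168.39.254 on eth0 via ARP, uses a kube-system lease (plndr-cp-lock) for leader election, and load-balances port 8443 across the API servers. A small sketch that reads such a manifest back and reports the advertised VIP, assuming the sigs.k8s.io/yaml and k8s.io/api modules; illustrative only, and the manifest path is a placeholder:

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Placeholder path; this is where the log later scp's the generated manifest.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	// Pull the "address" env var out of the kube-vip container: that is the VIP.
	for _, c := range pod.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Printf("%s advertises VIP %s\n", c.Image, e.Value)
			}
		}
	}
}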
	I0422 17:19:28.642637   30338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:19:28.653833   30338 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 17:19:28.653900   30338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 17:19:28.664803   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 17:19:28.664821   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0422 17:19:28.664840   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:19:28.664851   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:19:28.664914   30338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:19:28.664915   30338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:19:28.664803   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0422 17:19:28.665030   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:19:28.681947   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:19:28.681991   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 17:19:28.682024   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 17:19:28.682048   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 17:19:28.682069   30338 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:19:28.682085   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 17:19:28.706934   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 17:19:28.706984   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
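The kubelet, kubeadm and kubectl binaries are fetched from dl.k8s.io with a "checksum=file:...sha256" companion URL, verified, and only then copied into /var/lib/minikube/binaries/v1.30.0. A compact sketch of that download-and-verify pattern, stdlib only; the output path is a placeholder and this is not minikube's binary.go:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL and returns its body, failing on non-200 responses.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256") // published digest next to the binary
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		panic("checksum mismatch")
	}
	// Placeholder destination; the real run scp's into /var/lib/minikube/binaries.
	if err := os.WriteFile("/tmp/kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and written")
}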
	I0422 17:19:29.663184   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 17:19:29.672878   30338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0422 17:19:29.690607   30338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:19:29.709068   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 17:19:29.727623   30338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:19:29.732629   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:19:29.746738   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:19:29.872016   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:19:29.893549   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:19:29.894001   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:29.894057   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:29.910553   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0422 17:19:29.911074   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:29.911602   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:29.911625   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:29.911973   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:29.912143   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:19:29.912287   30338 start.go:316] joinCluster: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cluster
Name:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:19:29.912394   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 17:19:29.912409   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:19:29.915475   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:29.915931   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:19:29.915953   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:29.916128   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:19:29.916319   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:19:29.916483   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:19:29.916652   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:19:30.092391   30338 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:19:30.092442   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ziympr.v422sc69tns5sjqw --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0422 17:19:55.530146   30338 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ziympr.v422sc69tns5sjqw --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (25.437672291s)
	I0422 17:19:55.530187   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 17:19:56.137720   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-025067-m03 minikube.k8s.io/updated_at=2024_04_22T17_19_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=ha-025067 minikube.k8s.io/primary=false
	I0422 17:19:56.274126   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-025067-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 17:19:56.386379   30338 start.go:318] duration metric: took 26.474086213s to joinCluster
	I0422 17:19:56.386462   30338 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:19:56.388355   30338 out.go:177] * Verifying Kubernetes components...
	I0422 17:19:56.386850   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:56.389912   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:19:56.626564   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:19:56.706969   30338 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:19:56.707279   30338 kapi.go:59] client config for ha-025067: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 17:19:56.707347   30338 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.22:8443
	I0422 17:19:56.707510   30338 node_ready.go:35] waiting up to 6m0s for node "ha-025067-m03" to be "Ready" ...
	I0422 17:19:56.707573   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:56.707580   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:56.707588   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:56.707595   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:56.711622   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:57.208591   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:57.208614   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:57.208622   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:57.208626   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:57.213222   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:57.707933   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:57.707955   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:57.707963   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:57.707967   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:57.711533   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:19:58.208333   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:58.208356   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:58.208364   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:58.208369   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:58.212414   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:58.708556   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:58.708585   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:58.708593   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:58.708599   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:58.712758   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:58.713433   30338 node_ready.go:53] node "ha-025067-m03" has status "Ready":"False"
	I0422 17:19:59.208425   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:59.208448   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:59.208456   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:59.208460   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:59.212570   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:59.708399   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:59.708419   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:59.708426   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:59.708430   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:59.712589   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:00.208371   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:00.208394   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:00.208401   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:00.208406   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:00.212570   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:00.708399   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:00.708423   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:00.708433   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:00.708453   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:00.714064   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:00.715541   30338 node_ready.go:53] node "ha-025067-m03" has status "Ready":"False"
	I0422 17:20:01.208459   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:01.208482   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:01.208490   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:01.208493   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:01.212806   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:01.707796   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:01.707823   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:01.707835   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:01.707841   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:01.712135   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:02.208390   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:02.208412   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:02.208420   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:02.208424   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:02.212116   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:02.708431   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:02.708456   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:02.708465   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:02.708470   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:02.712114   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:03.208156   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:03.208179   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:03.208186   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:03.208190   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:03.211922   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:03.212519   30338 node_ready.go:53] node "ha-025067-m03" has status "Ready":"False"
	I0422 17:20:03.707878   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:03.707901   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:03.707908   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:03.707912   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:03.711494   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:04.208067   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:04.208092   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.208099   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.208103   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.211686   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:04.212498   30338 node_ready.go:49] node "ha-025067-m03" has status "Ready":"True"
	I0422 17:20:04.212517   30338 node_ready.go:38] duration metric: took 7.504994536s for node "ha-025067-m03" to be "Ready" ...
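In the loop above, node_ready.go polls GET /api/v1/nodes/ha-025067-m03 roughly every half second until the NodeReady condition turns True, with a 6m0s ceiling. The same wait can be expressed with client-go, as in this sketch; the kubeconfig path is a placeholder and this is not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "ha-025067-m03"
	// Poll every 500ms, give up after 6 minutes, mirroring the wait in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %s is Ready\n", nodeName)
}

Passing true as the immediate argument makes the first check happen before the first sleep, which matches the GET issued right after the join in the log.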
	I0422 17:20:04.212525   30338 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 17:20:04.212580   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:04.212589   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.212597   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.212600   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.219657   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:04.227271   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.227361   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nswqp
	I0422 17:20:04.227372   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.227379   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.227384   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.230634   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:04.231363   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:04.231378   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.231388   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.231395   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.234301   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.234925   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.234949   30338 pod_ready.go:81] duration metric: took 7.651097ms for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.234963   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.235028   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vrl4h
	I0422 17:20:04.235040   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.235050   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.235055   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.237846   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.238531   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:04.238550   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.238560   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.238565   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.241514   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.242223   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.242244   30338 pod_ready.go:81] duration metric: took 7.272849ms for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.242257   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.242322   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067
	I0422 17:20:04.242337   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.242347   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.242355   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.244701   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.245379   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:04.245397   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.245406   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.245411   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.247922   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.248378   30338 pod_ready.go:92] pod "etcd-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.248399   30338 pod_ready.go:81] duration metric: took 6.128387ms for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.248411   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.248466   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:20:04.248477   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.248486   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.248496   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.251437   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.252256   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:04.252271   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.252278   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.252284   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.260618   30338 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 17:20:04.261155   30338 pod_ready.go:92] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.261173   30338 pod_ready.go:81] duration metric: took 12.753655ms for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.261186   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.408581   30338 request.go:629] Waited for 147.316449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:04.408644   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:04.408653   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.408663   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.408671   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.412815   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:04.608462   30338 request.go:629] Waited for 195.048242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:04.608529   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:04.608537   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.608546   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.608555   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.613464   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:04.808436   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:04.808461   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.808469   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.808473   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.812589   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:05.009058   30338 request.go:629] Waited for 195.465329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.009129   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.009136   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.009147   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.009152   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.013015   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:05.262095   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:05.262121   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.262130   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.262136   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.267529   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:05.408725   30338 request.go:629] Waited for 140.334553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.408806   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.408812   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.408819   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.408828   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.412651   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:05.761631   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:05.761652   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.761659   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.761663   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.765941   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:05.809069   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.809095   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.809102   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.809106   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.812854   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:05.813500   30338 pod_ready.go:92] pod "etcd-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:05.813523   30338 pod_ready.go:81] duration metric: took 1.552329799s for pod "etcd-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:05.813547   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.008971   30338 request.go:629] Waited for 195.359368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067
	I0422 17:20:06.009049   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067
	I0422 17:20:06.009056   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.009064   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.009071   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.014402   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:06.208895   30338 request.go:629] Waited for 193.410481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:06.208964   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:06.208969   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.208976   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.208981   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.212765   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:06.213419   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:06.213436   30338 pod_ready.go:81] duration metric: took 399.882287ms for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.213447   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.408596   30338 request.go:629] Waited for 195.065355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m02
	I0422 17:20:06.408660   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m02
	I0422 17:20:06.408666   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.408676   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.408687   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.414791   30338 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 17:20:06.608283   30338 request.go:629] Waited for 192.222584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:06.608340   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:06.608346   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.608353   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.608362   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.611724   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:06.612358   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:06.612374   30338 pod_ready.go:81] duration metric: took 398.921569ms for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.612383   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.808569   30338 request.go:629] Waited for 196.119415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:06.808635   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:06.808640   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.808647   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.808652   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.812804   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:07.008863   30338 request.go:629] Waited for 195.374285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.008937   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.008945   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.008963   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.008990   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.013499   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:07.208457   30338 request.go:629] Waited for 95.340592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:07.208521   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:07.208526   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.208532   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.208537   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.212919   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:07.408233   30338 request.go:629] Waited for 194.383295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.408313   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.408321   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.408336   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.408346   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.411555   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:07.613411   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:07.613438   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.613449   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.613456   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.621109   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:07.808164   30338 request.go:629] Waited for 185.26956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.808255   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.808266   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.808277   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.808286   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.811932   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.113012   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:08.113034   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.113043   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.113047   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.116472   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.208791   30338 request.go:629] Waited for 91.272542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:08.208890   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:08.208898   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.208906   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.208913   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.212727   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.213327   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:08.213347   30338 pod_ready.go:81] duration metric: took 1.600957094s for pod "kube-apiserver-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.213383   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.408870   30338 request.go:629] Waited for 195.4052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067
	I0422 17:20:08.408968   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067
	I0422 17:20:08.408975   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.408982   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.408986   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.412980   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.609144   30338 request.go:629] Waited for 195.365293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:08.609205   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:08.609212   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.609226   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.609238   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.613235   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.614307   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:08.614325   30338 pod_ready.go:81] duration metric: took 400.930846ms for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.614333   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.808511   30338 request.go:629] Waited for 194.114176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:20:08.808610   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:20:08.808622   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.808630   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.808634   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.811957   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:09.009099   30338 request.go:629] Waited for 196.371859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.009187   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.009199   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.009209   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.009220   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.013088   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:09.013918   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:09.013935   30338 pod_ready.go:81] duration metric: took 399.595545ms for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.013944   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.208441   30338 request.go:629] Waited for 194.414374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m03
	I0422 17:20:09.208496   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m03
	I0422 17:20:09.208501   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.208509   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.208513   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.214076   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:09.408248   30338 request.go:629] Waited for 193.289304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:09.408321   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:09.408326   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.408332   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.408335   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.413024   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:09.413485   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:09.413503   30338 pod_ready.go:81] duration metric: took 399.553039ms for pod "kube-controller-manager-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.413516   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.608590   30338 request.go:629] Waited for 195.014295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:20:09.608670   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:20:09.608682   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.608695   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.608704   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.612912   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:09.808095   30338 request.go:629] Waited for 194.32254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.808159   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.808166   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.808173   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.808177   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.811542   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:09.812043   30338 pod_ready.go:92] pod "kube-proxy-dk5ww" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:09.812061   30338 pod_ready.go:81] duration metric: took 398.537697ms for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.812074   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.008615   30338 request.go:629] Waited for 196.476057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:20:10.008715   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:20:10.008726   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.008737   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.008744   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.013332   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:10.208359   30338 request.go:629] Waited for 193.179588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:10.208431   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:10.208442   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.208453   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.208462   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.216249   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:10.217026   30338 pod_ready.go:92] pod "kube-proxy-pf7cc" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:10.217047   30338 pod_ready.go:81] duration metric: took 404.966564ms for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.217055   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wsr9x" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.409006   30338 request.go:629] Waited for 191.869571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wsr9x
	I0422 17:20:10.409066   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wsr9x
	I0422 17:20:10.409071   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.409078   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.409085   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.412838   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:10.608857   30338 request.go:629] Waited for 195.390297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:10.608931   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:10.608943   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.608953   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.608960   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.612941   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:10.614350   30338 pod_ready.go:92] pod "kube-proxy-wsr9x" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:10.614367   30338 pod_ready.go:81] duration metric: took 397.302932ms for pod "kube-proxy-wsr9x" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.614376   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.808575   30338 request.go:629] Waited for 194.119598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:20:10.808658   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:20:10.808684   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.808695   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.808703   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.812493   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:11.008317   30338 request.go:629] Waited for 195.180211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:11.008418   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:11.008431   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.008442   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.008450   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.012464   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:11.014055   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:11.014072   30338 pod_ready.go:81] duration metric: took 399.690169ms for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.014095   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.208140   30338 request.go:629] Waited for 193.972024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:20:11.208203   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:20:11.208210   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.208220   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.208227   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.212964   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:11.408292   30338 request.go:629] Waited for 194.265102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:11.408362   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:11.408367   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.408374   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.408379   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.412023   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:11.413083   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:11.413098   30338 pod_ready.go:81] duration metric: took 398.996648ms for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.413112   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.608335   30338 request.go:629] Waited for 195.114356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m03
	I0422 17:20:11.608406   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m03
	I0422 17:20:11.608413   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.608424   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.608431   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.613255   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:11.808591   30338 request.go:629] Waited for 194.379878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:11.808643   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:11.808648   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.808656   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.808659   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.813031   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:11.813961   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:11.813980   30338 pod_ready.go:81] duration metric: took 400.860086ms for pod "kube-scheduler-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.813994   30338 pod_ready.go:38] duration metric: took 7.601459476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 17:20:11.814015   30338 api_server.go:52] waiting for apiserver process to appear ...
	I0422 17:20:11.814067   30338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:20:11.830960   30338 api_server.go:72] duration metric: took 15.444458246s to wait for apiserver process to appear ...
	I0422 17:20:11.830989   30338 api_server.go:88] waiting for apiserver healthz status ...
	I0422 17:20:11.831012   30338 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I0422 17:20:11.835763   30338 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
	I0422 17:20:11.835834   30338 round_trippers.go:463] GET https://192.168.39.22:8443/version
	I0422 17:20:11.835842   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.835854   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.835861   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.836962   30338 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 17:20:11.837099   30338 api_server.go:141] control plane version: v1.30.0
	I0422 17:20:11.837122   30338 api_server.go:131] duration metric: took 6.125261ms to wait for apiserver health ...
	I0422 17:20:11.837132   30338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 17:20:12.008533   30338 request.go:629] Waited for 171.326368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.008588   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.008593   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.008600   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.008605   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.016043   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:12.023997   30338 system_pods.go:59] 24 kube-system pods found
	I0422 17:20:12.024025   30338 system_pods.go:61] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:20:12.024030   30338 system_pods.go:61] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:20:12.024033   30338 system_pods.go:61] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:20:12.024043   30338 system_pods.go:61] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:20:12.024046   30338 system_pods.go:61] "etcd-ha-025067-m03" [991fbed5-cbd2-47f4-b6ed-6d5d8b90fc6f] Running
	I0422 17:20:12.024050   30338 system_pods.go:61] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:20:12.024057   30338 system_pods.go:61] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:20:12.024066   30338 system_pods.go:61] "kindnet-ztcgm" [8d90cd98-58d5-40bf-90fa-5098dd0ebed9] Running
	I0422 17:20:12.024084   30338 system_pods.go:61] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:20:12.024089   30338 system_pods.go:61] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:20:12.024095   30338 system_pods.go:61] "kube-apiserver-ha-025067-m03" [bb05295e-a36d-496c-ba52-427800a5e567] Running
	I0422 17:20:12.024104   30338 system_pods.go:61] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:20:12.024108   30338 system_pods.go:61] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:20:12.024115   30338 system_pods.go:61] "kube-controller-manager-ha-025067-m03" [122ddb06-24df-4fd0-b1fb-e9168ff5d3ba] Running
	I0422 17:20:12.024118   30338 system_pods.go:61] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:20:12.024121   30338 system_pods.go:61] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:20:12.024124   30338 system_pods.go:61] "kube-proxy-wsr9x" [fafeef7d-736f-4aa2-88a9-1a8ee00af204] Running
	I0422 17:20:12.024128   30338 system_pods.go:61] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:20:12.024133   30338 system_pods.go:61] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:20:12.024139   30338 system_pods.go:61] "kube-scheduler-ha-025067-m03" [1c9bea0c-edac-4cd7-85d9-cc9b23ced6f3] Running
	I0422 17:20:12.024142   30338 system_pods.go:61] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:20:12.024145   30338 system_pods.go:61] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:20:12.024148   30338 system_pods.go:61] "kube-vip-ha-025067-m03" [bf7d3c98-811f-450f-8764-76d0b87175bd] Running
	I0422 17:20:12.024154   30338 system_pods.go:61] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:20:12.024161   30338 system_pods.go:74] duration metric: took 187.022358ms to wait for pod list to return data ...
	I0422 17:20:12.024174   30338 default_sa.go:34] waiting for default service account to be created ...
	I0422 17:20:12.208594   30338 request.go:629] Waited for 184.345038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:20:12.208668   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:20:12.208673   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.208689   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.208699   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.211945   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:12.212074   30338 default_sa.go:45] found service account: "default"
	I0422 17:20:12.212090   30338 default_sa.go:55] duration metric: took 187.905867ms for default service account to be created ...
	I0422 17:20:12.212099   30338 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 17:20:12.408838   30338 request.go:629] Waited for 196.639234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.408919   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.408929   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.408939   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.408953   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.416098   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:12.424212   30338 system_pods.go:86] 24 kube-system pods found
	I0422 17:20:12.424244   30338 system_pods.go:89] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:20:12.424251   30338 system_pods.go:89] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:20:12.424258   30338 system_pods.go:89] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:20:12.424264   30338 system_pods.go:89] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:20:12.424270   30338 system_pods.go:89] "etcd-ha-025067-m03" [991fbed5-cbd2-47f4-b6ed-6d5d8b90fc6f] Running
	I0422 17:20:12.424276   30338 system_pods.go:89] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:20:12.424282   30338 system_pods.go:89] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:20:12.424288   30338 system_pods.go:89] "kindnet-ztcgm" [8d90cd98-58d5-40bf-90fa-5098dd0ebed9] Running
	I0422 17:20:12.424294   30338 system_pods.go:89] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:20:12.424300   30338 system_pods.go:89] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:20:12.424308   30338 system_pods.go:89] "kube-apiserver-ha-025067-m03" [bb05295e-a36d-496c-ba52-427800a5e567] Running
	I0422 17:20:12.424315   30338 system_pods.go:89] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:20:12.424325   30338 system_pods.go:89] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:20:12.424333   30338 system_pods.go:89] "kube-controller-manager-ha-025067-m03" [122ddb06-24df-4fd0-b1fb-e9168ff5d3ba] Running
	I0422 17:20:12.424341   30338 system_pods.go:89] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:20:12.424354   30338 system_pods.go:89] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:20:12.424360   30338 system_pods.go:89] "kube-proxy-wsr9x" [fafeef7d-736f-4aa2-88a9-1a8ee00af204] Running
	I0422 17:20:12.424367   30338 system_pods.go:89] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:20:12.424374   30338 system_pods.go:89] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:20:12.424384   30338 system_pods.go:89] "kube-scheduler-ha-025067-m03" [1c9bea0c-edac-4cd7-85d9-cc9b23ced6f3] Running
	I0422 17:20:12.424391   30338 system_pods.go:89] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:20:12.424402   30338 system_pods.go:89] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:20:12.424408   30338 system_pods.go:89] "kube-vip-ha-025067-m03" [bf7d3c98-811f-450f-8764-76d0b87175bd] Running
	I0422 17:20:12.424414   30338 system_pods.go:89] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:20:12.424426   30338 system_pods.go:126] duration metric: took 212.316904ms to wait for k8s-apps to be running ...
	I0422 17:20:12.424438   30338 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 17:20:12.424487   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:20:12.441136   30338 system_svc.go:56] duration metric: took 16.689409ms WaitForService to wait for kubelet
	I0422 17:20:12.441183   30338 kubeadm.go:576] duration metric: took 16.054683836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:20:12.441205   30338 node_conditions.go:102] verifying NodePressure condition ...
	I0422 17:20:12.608837   30338 request.go:629] Waited for 167.557346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes
	I0422 17:20:12.608887   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes
	I0422 17:20:12.608892   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.608900   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.608903   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.612754   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:12.613857   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:20:12.613878   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:20:12.613889   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:20:12.613892   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:20:12.613896   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:20:12.613899   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:20:12.613902   30338 node_conditions.go:105] duration metric: took 172.692667ms to run NodePressure ...
	I0422 17:20:12.613913   30338 start.go:240] waiting for startup goroutines ...
	I0422 17:20:12.613930   30338 start.go:254] writing updated cluster config ...
	I0422 17:20:12.614248   30338 ssh_runner.go:195] Run: rm -f paused
	I0422 17:20:12.664197   30338 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 17:20:12.666233   30338 out.go:177] * Done! kubectl is now configured to use "ha-025067" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.054916049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=580ff986-490d-46a7-a8f3-1ca881b75f20 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.056243949Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16c96d08-e6e9-441d-80ae-9c5923cbe04b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.056913385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806624056886907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16c96d08-e6e9-441d-80ae-9c5923cbe04b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.057540292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ed0baf0-3fd0-46e4-baab-2ddd6ec35172 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.057622306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ed0baf0-3fd0-46e4-baab-2ddd6ec35172 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.057871095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ed0baf0-3fd0-46e4-baab-2ddd6ec35172 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.064472507Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9fb18fa7-4a49-4610-b541-f692d7c74e42 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.064759829Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-l97ld,Uid:ca33d56c-e408-4501-9462-76c58f2b23dd,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806413979593353,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:20:13.651837974Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1713806270632537558,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-22T17:17:50.306137173Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nswqp,Uid:bedfb6c0-6553-4ec2-9318-d1997a2994e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806270331824356,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:17:49.996313841Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vrl4h,Uid:9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1713806270325297549,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:17:49.985124109Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&PodSandboxMetadata{Name:kube-proxy-pf7cc,Uid:4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806268077362416,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-04-22T17:17:47.727671217Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&PodSandboxMetadata{Name:kindnet-tmxd9,Uid:0d448df8-32a2-46e8-bcbf-fac5d147e45f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806268046162243,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:17:47.738599463Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-025067,Uid:23538072fbf30b79e739fab4230ece56,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1713806247821356433,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 23538072fbf30b79e739fab4230ece56,kubernetes.io/config.seen: 2024-04-22T17:17:27.320126939Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-025067,Uid:8dd89f0fa3e1221316981adeb7afd503,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806247819380624,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd50
3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8dd89f0fa3e1221316981adeb7afd503,kubernetes.io/config.seen: 2024-04-22T17:17:27.320125780Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-025067,Uid:dafca65b718398ce567dba12ba2494c7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806247815641219,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.22:8443,kubernetes.io/config.hash: dafca65b718398ce567dba12ba2494c7,kubernetes.io/config.seen: 2024-04-22T17:17:27.320124610Z,kubernetes.io/config.source: file,},RuntimeHandler:,}
,&PodSandbox{Id:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&PodSandboxMetadata{Name:etcd-ha-025067,Uid:29630f2b98931e48da1483cad97880d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806247810143519,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.22:2379,kubernetes.io/config.hash: 29630f2b98931e48da1483cad97880d6,kubernetes.io/config.seen: 2024-04-22T17:17:27.320123247Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-025067,Uid:a734ee4ab85ed101d0ef67cd65d88766,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713806247791961072,Labels:
map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{kubernetes.io/config.hash: a734ee4ab85ed101d0ef67cd65d88766,kubernetes.io/config.seen: 2024-04-22T17:17:27.320118767Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9fb18fa7-4a49-4610-b541-f692d7c74e42 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.065667330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ceebde7b-2253-4e69-864a-aeca349cc813 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.065743494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ceebde7b-2253-4e69-864a-aeca349cc813 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.065998655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ceebde7b-2253-4e69-864a-aeca349cc813 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.100736406Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d12e36b8-08ec-4d1c-9b1b-8e95de03b5b1 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.100840482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d12e36b8-08ec-4d1c-9b1b-8e95de03b5b1 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.102392475Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f0877d41-8879-4fcd-a91a-a5c5d33dda3a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.102828867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806624102803105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0877d41-8879-4fcd-a91a-a5c5d33dda3a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.103633804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2322e5ab-549e-4327-95cb-02bec7d2f3fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.103707189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2322e5ab-549e-4327-95cb-02bec7d2f3fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.104441296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2322e5ab-549e-4327-95cb-02bec7d2f3fc name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.148441747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a8fd3fd-352f-44cf-810a-0efe2301d4fc name=/runtime.v1.RuntimeService/Version
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.148518611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a8fd3fd-352f-44cf-810a-0efe2301d4fc name=/runtime.v1.RuntimeService/Version
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.149677031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64835925-dd66-4b23-86be-9df2703f476e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.150469224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806624150433609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64835925-dd66-4b23-86be-9df2703f476e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.151087233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a61dd17-3e05-4c84-b163-169cec9abd8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.151165113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a61dd17-3e05-4c84-b163-169cec9abd8c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:23:44 ha-025067 crio[684]: time="2024-04-22 17:23:44.151393498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a61dd17-3e05-4c84-b163-169cec9abd8c name=/runtime.v1.RuntimeService/ListContainers
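
The debug lines above are CRI-O answering the Kubernetes CRI (runtime.v1.RuntimeService) on its unix socket; the log collector repeatedly issues Version, ImageFsInfo, ListPodSandbox and ListContainers, which is why the same container list appears several times. As a minimal sketch (not part of the test harness, and assuming the default CRI-O socket path and the k8s.io/cri-api and google.golang.org/grpc modules), the same ListContainers call can be made like this:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O is listening on its default socket path.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial crio socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same call as /runtime.v1.RuntimeService/ListContainers in the log above,
	// with an empty filter, so the full container list is returned.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}

The condensed table in the "container status" section below is essentially the same data rendered per container (id, image, age, state, name, attempt, pod).
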
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	983cb8537237f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   3c3abb6c214d4       busybox-fc5497c4f-l97ld
	d608f1d990199       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   9f37c522b34de       storage-provisioner
	c0af820e7bd06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   c2921baac16b3       coredns-7db6d8ff4d-vrl4h
	524e02d80347d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   b553b11bb990b       coredns-7db6d8ff4d-nswqp
	e792653200952       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Running             kindnet-cni               0                   20bb53838ad91       kindnet-tmxd9
	f841dcb8dd09b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                0                   052596614cf9c       kube-proxy-pf7cc
	ce4c01cd6ca70       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   d34272323a0f8       kube-vip-ha-025067
	b3d751e3e8f50       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   3c34eb37cd442       etcd-ha-025067
	549930f1d83f6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      6 minutes ago       Running             kube-scheduler            0                   c0ff0dbc27bbd       kube-scheduler-ha-025067
	819e895185838       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      6 minutes ago       Running             kube-controller-manager   0                   a499e1bb77c00       kube-controller-manager-ha-025067
	9bc987b1519c5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      6 minutes ago       Running             kube-apiserver            0                   8c39dcc79583c       kube-apiserver-ha-025067
	
	
	==> coredns [524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0] <==
	[INFO] 10.244.0.4:52803 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122922s
	[INFO] 10.244.0.4:45587 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164214s
	[INFO] 10.244.0.4:36350 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134111s
	[INFO] 10.244.1.2:56300 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001818553s
	[INFO] 10.244.1.2:58403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100106s
	[INFO] 10.244.1.2:49747 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094083s
	[INFO] 10.244.1.2:39851 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094869s
	[INFO] 10.244.1.2:51921 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132016s
	[INFO] 10.244.2.2:46485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151891s
	[INFO] 10.244.2.2:52343 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183731s
	[INFO] 10.244.2.2:36982 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162215s
	[INFO] 10.244.2.2:56193 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471319s
	[INFO] 10.244.2.2:48503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072359s
	[INFO] 10.244.2.2:35429 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006794s
	[INFO] 10.244.2.2:56484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092002s
	[INFO] 10.244.0.4:39516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189987s
	[INFO] 10.244.0.4:60228 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082728s
	[INFO] 10.244.1.2:44703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203159s
	[INFO] 10.244.1.2:33524 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167155s
	[INFO] 10.244.1.2:43201 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098618s
	[INFO] 10.244.2.2:53563 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215578s
	[INFO] 10.244.2.2:54616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163304s
	[INFO] 10.244.0.4:49280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092142s
	[INFO] 10.244.1.2:40544 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116574s
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249064s
	
	
	==> coredns [c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55] <==
	[INFO] 10.244.1.2:60175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001836146s
	[INFO] 10.244.2.2:52744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012764s
	[INFO] 10.244.2.2:37678 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001635715s
	[INFO] 10.244.0.4:33703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230709s
	[INFO] 10.244.0.4:60463 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000233694s
	[INFO] 10.244.0.4:44231 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.015736347s
	[INFO] 10.244.0.4:37322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115326s
	[INFO] 10.244.1.2:58538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135694s
	[INFO] 10.244.1.2:51828 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153493s
	[INFO] 10.244.1.2:44556 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001447535s
	[INFO] 10.244.2.2:44901 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139485s
	[INFO] 10.244.0.4:42667 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108865s
	[INFO] 10.244.0.4:54399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073213s
	[INFO] 10.244.1.2:35127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090826s
	[INFO] 10.244.2.2:52722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185046s
	[INFO] 10.244.2.2:49596 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128238s
	[INFO] 10.244.0.4:59309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125541s
	[INFO] 10.244.0.4:42344 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215786s
	[INFO] 10.244.0.4:34084 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000295612s
	[INFO] 10.244.1.2:50561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016924s
	[INFO] 10.244.1.2:40185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080135s
	[INFO] 10.244.1.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083107s
	[INFO] 10.244.2.2:52310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147992s
	[INFO] 10.244.2.2:48499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103149s
	[INFO] 10.244.2.2:60500 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018474s
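
Both coredns blocks above use the log plugin's query-log format: client ip:port, query id, then the quoted request (type, class, name, protocol, request size, DO bit, UDP buffer size), followed by the response code, response flags, response size, and duration. A small illustrative Go parser for one of these lines (not part of the test suite; field layout inferred from the lines above):

package main

import (
	"fmt"
	"regexp"
)

// queryLine matches the coredns query-log lines seen above, e.g.
// [INFO] 10.244.0.4:52803 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122922s
var queryLine = regexp.MustCompile(
	`^\[INFO\] (\S+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

func main() {
	line := `[INFO] 10.244.0.4:52803 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122922s`
	m := queryLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("client:   ", m[1])  // source ip:port
	fmt.Println("qtype:    ", m[3])  // AAAA, A, PTR, ...
	fmt.Println("name:     ", m[5])  // queried name
	fmt.Println("rcode:    ", m[10]) // NOERROR, NXDOMAIN, ...
	fmt.Println("duration: ", m[13]) // query handling time
}
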
	
	
	==> describe nodes <==
	Name:               ha-025067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:17:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:23:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-025067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 73a664449fd9403194a5919e23b0871b
	  System UUID:                73a66444-9fd9-4031-94a5-919e23b0871b
	  Boot ID:                    4c2ace2e-318b-4b8f-bd1e-a5f6d5151f88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l97ld              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 coredns-7db6d8ff4d-nswqp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 coredns-7db6d8ff4d-vrl4h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m57s
	  kube-system                 etcd-ha-025067                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-tmxd9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-apiserver-ha-025067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-ha-025067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-proxy-pf7cc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-025067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-025067                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m55s                  kube-proxy       
	  Normal  Starting                 6m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m10s (x2 over 6m10s)  kubelet          Node ha-025067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x2 over 6m10s)  kubelet          Node ha-025067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x2 over 6m10s)  kubelet          Node ha-025067 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m57s                  node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal  NodeReady                5m55s                  kubelet          Node ha-025067 status is now: NodeReady
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal  RegisteredNode           3m33s                  node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
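
The next node in this dump, ha-025067-m02, is the one the test stopped: its conditions have gone Unknown and the unreachable taints have been applied. A hedged client-go sketch of how such a state can be spotted programmatically (the kubeconfig path is a placeholder; this is not how the test itself checks it):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig for the ha-025067 cluster lives at this placeholder path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				// Status becomes "Unknown" once the kubelet stops posting node status,
				// as seen for ha-025067-m02 below.
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
		for _, t := range n.Spec.Taints {
			fmt.Printf("%s taint %s:%s\n", n.Name, t.Key, t.Effect)
		}
	}
}
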
	
	
	Name:               ha-025067-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_18_41_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:18:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:21:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-025067-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1f034a156f4a3fb9cb79780785386e
	  System UUID:                8a1f034a-156f-4a3f-b9cb-79780785386e
	  Boot ID:                    f3fb9e45-42b6-4f46-ad83-f76ee2a3cbe3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m6qxt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-025067-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m4s
	  kube-system                 kindnet-ctdzp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m6s
	  kube-system                 kube-apiserver-ha-025067-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-ha-025067-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-dk5ww                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-ha-025067-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-vip-ha-025067-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node ha-025067-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x7 over 5m6s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           4m48s                node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           3m33s                node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  NodeNotReady             98s                  node-controller  Node ha-025067-m02 status is now: NodeNotReady
	
	
	Name:               ha-025067-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_19_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:19:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:23:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:19:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:19:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:19:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:20:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-025067-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 300afc7a045c4fd490327eb7452e4f8c
	  System UUID:                300afc7a-045c-4fd4-9032-7eb7452e4f8c
	  Boot ID:                    d51c7e9b-22eb-41ed-8a76-3c0480ae4c87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tvcmk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-025067-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m49s
	  kube-system                 kindnet-ztcgm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m51s
	  kube-system                 kube-apiserver-ha-025067-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ha-025067-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-wsr9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-ha-025067-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-vip-ha-025067-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m46s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m51s (x8 over 3m52s)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s (x8 over 3m52s)  kubelet          Node ha-025067-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m51s (x7 over 3m52s)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal  RegisteredNode           3m33s                  node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	
	
	Name:               ha-025067-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_20_51_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:20:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:23:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:20:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:20:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:20:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:21:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-025067-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfe8f8092cda4851adcca8410e5437c9
	  System UUID:                bfe8f809-2cda-4851-adcc-a8410e5437c9
	  Boot ID:                    9233437f-4ac9-4a5c-8bc3-15be3e575746
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d6tpm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m53s
	  kube-system                 kube-proxy-kbhbk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m47s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m54s (x2 over 2m54s)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m54s (x2 over 2m54s)  kubelet          Node ha-025067-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m54s (x2 over 2m54s)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m53s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal  RegisteredNode           2m53s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal  NodeAllocatableEnforced  2m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-025067-m04 status is now: NodeReady
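
The node descriptions above show why the test treats ha-025067-m02 as down: its kubelet stopped posting status, so every condition is Unknown and the node carries the node.kubernetes.io/unreachable taints, while ha-025067-m03 and ha-025067-m04 remain Ready. A minimal way to re-check just those conditions against this profile (a sketch, assuming the ha-025067 kubeconfig context from this run is still loadable):

  # sketch: list condition Type=Status pairs for the stopped secondary node
  kubectl --context ha-025067 get node ha-025067-m02 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

For a node in this state each line prints Unknown, e.g. Ready=Unknown.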
	
	
	==> dmesg <==
	[Apr22 17:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040230] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr22 17:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.880735] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.629211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.717166] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.065948] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064117] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195469] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.121015] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285063] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.448511] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059431] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.181808] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.968933] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.287359] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.083571] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.934890] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 17:18] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f] <==
	{"level":"warn","ts":"2024-04-22T17:23:44.239335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.441585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.45484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.458865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.471844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.478855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.486735Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.492236Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.495463Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.503441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.511337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.518755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.523289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.527163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.529237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.539458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.548205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.555309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.560739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.565587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.571994Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.578408Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.584469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.62914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:23:44.629505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:23:44 up 6 min,  0 users,  load average: 0.14, 0.13, 0.05
	Linux ha-025067 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9] <==
	I0422 17:23:09.942687       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:23:19.951360       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:23:19.951414       1 main.go:227] handling current node
	I0422 17:23:19.951440       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:23:19.951450       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:23:19.951597       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:23:19.951633       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:23:19.951717       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:23:19.951752       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:23:29.958162       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:23:29.958207       1 main.go:227] handling current node
	I0422 17:23:29.958219       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:23:29.958225       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:23:29.958366       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:23:29.958731       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:23:29.958896       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:23:29.958935       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:23:39.972932       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:23:39.973161       1 main.go:227] handling current node
	I0422 17:23:39.973204       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:23:39.973234       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:23:39.973404       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:23:39.973445       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:23:39.973567       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:23:39.973598       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
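
The kindnet daemon walks all four nodes every ten seconds and records the pod CIDR it routes for each peer (10.244.1.0/24 through 10.244.3.0/24 for the secondary nodes). Those assignments can be cross-checked against the API server with a short query; this is a sketch, assuming the profile's kubeconfig is still present:

  # sketch: print each node's allocated pod CIDR
  kubectl --context ha-025067 get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR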
	
	
	==> kube-apiserver [9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b] <==
	E0422 17:17:34.274861       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"client disconnected"}: client disconnected
	E0422 17:17:34.274994       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0422 17:17:34.276119       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0422 17:17:34.276157       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0422 17:17:34.277898       1 timeout.go:142] post-timeout activity - time-elapsed: 2.971164ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0422 17:17:34.322005       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 17:17:34.343471       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0422 17:17:34.493863       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 17:17:47.670486       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0422 17:17:47.766389       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0422 17:20:17.847358       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57020: use of closed network connection
	E0422 17:20:18.052687       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57034: use of closed network connection
	E0422 17:20:18.268726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57052: use of closed network connection
	E0422 17:20:18.484851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57078: use of closed network connection
	E0422 17:20:18.702901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57092: use of closed network connection
	E0422 17:20:18.897172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57098: use of closed network connection
	E0422 17:20:19.097803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57116: use of closed network connection
	E0422 17:20:19.305950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57130: use of closed network connection
	E0422 17:20:19.498927       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57146: use of closed network connection
	E0422 17:20:19.836096       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57174: use of closed network connection
	E0422 17:20:20.040429       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57194: use of closed network connection
	E0422 17:20:20.265570       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57198: use of closed network connection
	E0422 17:20:20.450583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57218: use of closed network connection
	E0422 17:20:20.848329       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57248: use of closed network connection
	W0422 17:21:32.941804       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.220]
	
	
	==> kube-controller-manager [819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158] <==
	I0422 17:18:42.675807       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m02"
	I0422 17:19:52.993162       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-025067-m03\" does not exist"
	I0422 17:19:53.024303       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-025067-m03" podCIDRs=["10.244.2.0/24"]
	I0422 17:19:57.730824       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m03"
	I0422 17:20:13.672920       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.629553ms"
	I0422 17:20:13.845423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.979607ms"
	I0422 17:20:14.133932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="288.190671ms"
	E0422 17:20:14.134006       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0422 17:20:14.199360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.214623ms"
	I0422 17:20:14.199589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.252µs"
	I0422 17:20:14.479901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.946µs"
	I0422 17:20:17.125741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.752001ms"
	I0422 17:20:17.125869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.335µs"
	I0422 17:20:17.199312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.549855ms"
	I0422 17:20:17.199449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.695µs"
	I0422 17:20:17.275434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.800596ms"
	I0422 17:20:17.293193       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.363µs"
	E0422 17:20:50.894840       1 certificate_controller.go:146] Sync csr-phw8q failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-phw8q": the object has been modified; please apply your changes to the latest version and try again
	I0422 17:20:51.171537       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-025067-m04\" does not exist"
	I0422 17:20:51.196798       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-025067-m04" podCIDRs=["10.244.3.0/24"]
	I0422 17:20:52.758431       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m04"
	I0422 17:21:02.204529       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-025067-m04"
	I0422 17:22:06.310467       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-025067-m04"
	I0422 17:22:06.409315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.463987ms"
	I0422 17:22:06.409587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.041µs"
	
	
	==> kube-proxy [f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72] <==
	I0422 17:17:48.726691       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:17:48.757347       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	I0422 17:17:48.861680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:17:48.861739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:17:48.861755       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:17:48.864675       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:17:48.865106       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:17:48.865139       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:17:48.866289       1 config.go:192] "Starting service config controller"
	I0422 17:17:48.866321       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:17:48.866341       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:17:48.866345       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:17:48.868358       1 config.go:319] "Starting node config controller"
	I0422 17:17:48.868391       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:17:48.968146       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:17:48.968207       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:17:48.969297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89] <==
	W0422 17:17:31.189846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 17:17:31.189993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:17:31.189754       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 17:17:31.190224       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 17:17:32.005472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:17:32.005606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:17:32.048534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 17:17:32.048691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 17:17:32.103231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 17:17:32.103388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 17:17:32.162682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 17:17:32.162810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 17:17:32.328588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:17:32.328736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:17:32.398176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:17:32.398208       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 17:17:32.531181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:17:32.531303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:17:32.744247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:17:32.744909       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 17:17:34.780938       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 17:20:51.292468       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fjzpp\": pod kindnet-fjzpp is already assigned to node \"ha-025067-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fjzpp" node="ha-025067-m04"
	E0422 17:20:51.292673       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 528898c4-830d-4367-9bc3-59f41121702e(kube-system/kindnet-fjzpp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fjzpp"
	E0422 17:20:51.292706       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fjzpp\": pod kindnet-fjzpp is already assigned to node \"ha-025067-m04\"" pod="kube-system/kindnet-fjzpp"
	I0422 17:20:51.292734       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fjzpp" node="ha-025067-m04"
	
	
	==> kubelet <==
	Apr 22 17:19:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:19:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:19:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:20:13 ha-025067 kubelet[1368]: I0422 17:20:13.652149    1368 topology_manager.go:215] "Topology Admit Handler" podUID="ca33d56c-e408-4501-9462-76c58f2b23dd" podNamespace="default" podName="busybox-fc5497c4f-l97ld"
	Apr 22 17:20:13 ha-025067 kubelet[1368]: I0422 17:20:13.746569    1368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55r6\" (UniqueName: \"kubernetes.io/projected/ca33d56c-e408-4501-9462-76c58f2b23dd-kube-api-access-v55r6\") pod \"busybox-fc5497c4f-l97ld\" (UID: \"ca33d56c-e408-4501-9462-76c58f2b23dd\") " pod="default/busybox-fc5497c4f-l97ld"
	Apr 22 17:20:34 ha-025067 kubelet[1368]: E0422 17:20:34.512670    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:20:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:20:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:20:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:20:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:21:34 ha-025067 kubelet[1368]: E0422 17:21:34.511429    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:21:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:21:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:21:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:21:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:22:34 ha-025067 kubelet[1368]: E0422 17:22:34.512373    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:22:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:22:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:22:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:22:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:23:34 ha-025067 kubelet[1368]: E0422 17:23:34.512577    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:23:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:23:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:23:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:23:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-025067 -n ha-025067
helpers_test.go:261: (dbg) Run:  kubectl --context ha-025067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.05s)
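
The kubelet entries in the post-mortem above fail once a minute to create the KUBE-KUBELET-CANARY chain because the guest kernel exposes no ip6tables nat table. The same probe can be run by hand over minikube ssh; this is a sketch, assuming the ha-025067 machine from this run is still up:

  # sketch: exits non-zero with "Table does not exist" when the IPv6 NAT table is absent
  out/minikube-linux-amd64 -p ha-025067 ssh "sudo ip6tables -t nat -L -n"

If that reproduces the error, only the IPv4 canary (iptables -t nat) can succeed on this Buildroot kernel, and the recurring ip6tables messages are likely unrelated to the StopSecondaryNode failure itself.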

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (48.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (3.174029421s)

                                                
                                                
-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:23:49.216760   35580 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:23:49.217023   35580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:23:49.217034   35580 out.go:304] Setting ErrFile to fd 2...
	I0422 17:23:49.217041   35580 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:23:49.217228   35580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:23:49.217412   35580 out.go:298] Setting JSON to false
	I0422 17:23:49.217444   35580 mustload.go:65] Loading cluster: ha-025067
	I0422 17:23:49.217548   35580 notify.go:220] Checking for updates...
	I0422 17:23:49.217942   35580 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:23:49.217961   35580 status.go:255] checking status of ha-025067 ...
	I0422 17:23:49.218458   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:49.218522   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:49.236264   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0422 17:23:49.236847   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:49.237523   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:49.237555   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:49.238066   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:49.238300   35580 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:23:49.240203   35580 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:23:49.240231   35580 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:23:49.240662   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:49.240713   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:49.255633   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0422 17:23:49.256086   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:49.256581   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:49.256600   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:49.256902   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:49.257131   35580 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:23:49.260392   35580 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:49.260785   35580 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:23:49.260817   35580 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:49.261002   35580 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:23:49.261289   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:49.261326   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:49.277112   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41023
	I0422 17:23:49.277603   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:49.278150   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:49.278174   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:49.278516   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:49.278723   35580 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:23:49.278956   35580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:49.278977   35580 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:23:49.282089   35580 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:49.282543   35580 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:23:49.282568   35580 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:49.282758   35580 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:23:49.282927   35580 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:23:49.283092   35580 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:23:49.283244   35580 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:23:49.368792   35580 ssh_runner.go:195] Run: systemctl --version
	I0422 17:23:49.375093   35580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:49.392993   35580 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:23:49.393024   35580 api_server.go:166] Checking apiserver status ...
	I0422 17:23:49.393073   35580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:23:49.409443   35580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:23:49.420350   35580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:23:49.420410   35580 ssh_runner.go:195] Run: ls
	I0422 17:23:49.426097   35580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:23:49.432716   35580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:23:49.432748   35580 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:23:49.432761   35580 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:23:49.432777   35580 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:23:49.433144   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:49.433183   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:49.448886   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44741
	I0422 17:23:49.449344   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:49.449848   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:49.449871   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:49.450181   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:49.450460   35580 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:23:49.452248   35580 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:23:49.452267   35580 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:23:49.452580   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:49.452613   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:49.468072   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I0422 17:23:49.468447   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:49.468925   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:49.468951   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:49.469245   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:49.469459   35580 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:23:49.472704   35580 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:49.473172   35580 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:23:49.473195   35580 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:49.473347   35580 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:23:49.473698   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:49.473741   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:49.488401   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36611
	I0422 17:23:49.488813   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:49.489285   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:49.489310   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:49.489627   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:49.489849   35580 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:23:49.490044   35580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:49.490063   35580 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:23:49.493083   35580 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:49.493635   35580 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:23:49.493663   35580 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:49.493774   35580 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:23:49.494176   35580 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:23:49.494365   35580 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:23:49.494507   35580 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	W0422 17:23:51.991428   35580 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:23:51.991542   35580 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0422 17:23:51.991576   35580 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:23:51.991584   35580 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:23:51.991600   35580 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:23:51.991609   35580 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:23:51.991927   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:51.991991   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:52.007175   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33263
	I0422 17:23:52.007576   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:52.008021   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:52.008042   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:52.008326   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:52.008495   35580 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:23:52.010071   35580 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:23:52.010088   35580 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:23:52.010404   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:52.010442   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:52.025850   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
	I0422 17:23:52.026222   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:52.026707   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:52.026728   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:52.027005   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:52.027195   35580 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:23:52.029671   35580 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:52.030064   35580 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:23:52.030089   35580 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:52.030229   35580 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:23:52.030514   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:52.030550   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:52.045185   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45879
	I0422 17:23:52.045574   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:52.046021   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:52.046049   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:52.046290   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:52.046467   35580 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:23:52.046619   35580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:52.046634   35580 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:23:52.049097   35580 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:52.049476   35580 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:23:52.049501   35580 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:52.049649   35580 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:23:52.049799   35580 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:23:52.049942   35580 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:23:52.050067   35580 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:23:52.131735   35580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:52.147420   35580 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:23:52.147453   35580 api_server.go:166] Checking apiserver status ...
	I0422 17:23:52.147511   35580 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:23:52.161657   35580 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:23:52.172115   35580 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:23:52.172207   35580 ssh_runner.go:195] Run: ls
	I0422 17:23:52.177000   35580 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:23:52.181363   35580 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:23:52.181384   35580 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:23:52.181392   35580 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:23:52.181407   35580 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:23:52.181731   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:52.181765   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:52.196634   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44561
	I0422 17:23:52.197128   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:52.197683   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:52.197704   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:52.198028   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:52.198194   35580 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:23:52.199847   35580 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:23:52.199867   35580 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:23:52.200154   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:52.200185   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:52.215028   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33483
	I0422 17:23:52.215483   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:52.216021   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:52.216043   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:52.216357   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:52.216546   35580 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:23:52.219164   35580 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:52.219501   35580 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:23:52.219533   35580 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:52.219633   35580 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:23:52.219924   35580 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:52.219957   35580 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:52.234465   35580 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46371
	I0422 17:23:52.234830   35580 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:52.235316   35580 main.go:141] libmachine: Using API Version  1
	I0422 17:23:52.235344   35580 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:52.235660   35580 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:52.235875   35580 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:23:52.236065   35580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:52.236083   35580 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:23:52.238890   35580 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:52.239274   35580 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:23:52.239309   35580 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:52.239492   35580 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:23:52.239680   35580 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:23:52.239830   35580 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:23:52.239972   35580 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:23:52.319216   35580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:52.335025   35580 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
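For context, the per-node status fields above come from a fixed sequence of probes that the stderr log records as ssh_runner commands. The following is a minimal sketch of the same checks run by hand over SSH to a node, using only the commands visible in the log; the final curl probe stands in for the HTTP healthz check the tool performs internally and is hypothetical here:

	# disk usage of /var -- the step that fails with "no route to host" on ha-025067-m02
	df -h /var | awk 'NR==2{print $5}'
	# kubelet status, exactly as invoked in the log
	sudo systemctl is-active --quiet service kubelet && echo kubelet active
	# newest kube-apiserver process, used afterwards to look up its cgroup
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# apiserver health endpoint taken from the kubeconfig server address above
	curl -sk https://192.168.39.254:8443/healthz

The m02 failure occurs before any of these run: the SSH dial itself returns "no route to host", so the node is reported as Host:Error with kubelet and apiserver Nonexistent.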
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (5.421292496s)

                                                
                                                
-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:23:53.150643   35664 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:23:53.150978   35664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:23:53.150992   35664 out.go:304] Setting ErrFile to fd 2...
	I0422 17:23:53.150998   35664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:23:53.151318   35664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:23:53.151502   35664 out.go:298] Setting JSON to false
	I0422 17:23:53.151526   35664 mustload.go:65] Loading cluster: ha-025067
	I0422 17:23:53.151751   35664 notify.go:220] Checking for updates...
	I0422 17:23:53.152764   35664 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:23:53.152793   35664 status.go:255] checking status of ha-025067 ...
	I0422 17:23:53.153964   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:53.154005   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:53.174930   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0422 17:23:53.175396   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:53.175990   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:53.176013   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:53.176457   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:53.176710   35664 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:23:53.178289   35664 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:23:53.178316   35664 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:23:53.178665   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:53.178706   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:53.194510   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46679
	I0422 17:23:53.194877   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:53.195393   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:53.195423   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:53.195747   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:53.195973   35664 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:23:53.198543   35664 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:53.198959   35664 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:23:53.198996   35664 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:53.199071   35664 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:23:53.199377   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:53.199412   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:53.214231   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45595
	I0422 17:23:53.214674   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:53.215156   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:53.215183   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:53.215556   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:53.215782   35664 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:23:53.216057   35664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:53.216079   35664 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:23:53.218748   35664 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:53.219146   35664 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:23:53.219175   35664 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:23:53.219325   35664 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:23:53.219495   35664 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:23:53.219628   35664 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:23:53.219746   35664 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:23:53.304357   35664 ssh_runner.go:195] Run: systemctl --version
	I0422 17:23:53.311084   35664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:53.326709   35664 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:23:53.326737   35664 api_server.go:166] Checking apiserver status ...
	I0422 17:23:53.326783   35664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:23:53.342229   35664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:23:53.352324   35664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:23:53.352379   35664 ssh_runner.go:195] Run: ls
	I0422 17:23:53.357173   35664 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:23:53.363449   35664 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:23:53.363479   35664 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:23:53.363492   35664 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:23:53.363517   35664 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:23:53.363894   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:53.363937   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:53.379747   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33821
	I0422 17:23:53.380159   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:53.380688   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:53.380706   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:53.380995   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:53.381179   35664 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:23:53.382890   35664 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:23:53.382908   35664 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:23:53.383206   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:53.383245   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:53.397576   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0422 17:23:53.398007   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:53.398445   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:53.398472   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:53.398796   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:53.398954   35664 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:23:53.401700   35664 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:53.402128   35664 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:23:53.402160   35664 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:53.402201   35664 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:23:53.402518   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:53.402551   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:53.417087   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39729
	I0422 17:23:53.417443   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:53.417924   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:53.417944   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:53.418259   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:53.418470   35664 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:23:53.418619   35664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:53.418639   35664 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:23:53.421164   35664 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:53.421613   35664 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:23:53.421638   35664 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:23:53.421811   35664 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:23:53.421987   35664 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:23:53.422126   35664 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:23:53.422269   35664 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	W0422 17:23:55.063515   35664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:23:55.063588   35664 retry.go:31] will retry after 261.509614ms: dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:23:58.139473   35664 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:23:58.139566   35664 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0422 17:23:58.139590   35664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:23:58.139603   35664 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:23:58.139628   35664 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:23:58.139641   35664 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:23:58.139986   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:58.140036   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:58.154658   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42795
	I0422 17:23:58.155115   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:58.155642   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:58.155664   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:58.155957   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:58.156176   35664 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:23:58.157694   35664 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:23:58.157713   35664 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:23:58.158104   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:58.158150   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:58.173082   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33505
	I0422 17:23:58.173509   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:58.173959   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:58.173995   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:58.174330   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:58.174528   35664 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:23:58.177408   35664 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:58.177858   35664 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:23:58.177882   35664 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:58.178054   35664 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:23:58.178345   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:58.178377   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:58.193091   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33177
	I0422 17:23:58.193568   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:58.194017   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:58.194051   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:58.194398   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:58.194655   35664 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:23:58.194855   35664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:58.194874   35664 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:23:58.198055   35664 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:58.198514   35664 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:23:58.198540   35664 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:23:58.198702   35664 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:23:58.198879   35664 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:23:58.199029   35664 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:23:58.199175   35664 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:23:58.280413   35664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:58.300284   35664 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:23:58.300314   35664 api_server.go:166] Checking apiserver status ...
	I0422 17:23:58.300353   35664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:23:58.324796   35664 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:23:58.345004   35664 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:23:58.345078   35664 ssh_runner.go:195] Run: ls
	I0422 17:23:58.353114   35664 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:23:58.360947   35664 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:23:58.360970   35664 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:23:58.360982   35664 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:23:58.361000   35664 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:23:58.361296   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:58.361362   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:58.375893   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0422 17:23:58.376355   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:58.376801   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:58.376832   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:58.377179   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:58.377357   35664 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:23:58.378990   35664 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:23:58.379006   35664 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:23:58.379306   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:58.379359   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:58.393790   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0422 17:23:58.394224   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:58.394733   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:58.394753   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:58.395041   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:58.395213   35664 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:23:58.398104   35664 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:58.398475   35664 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:23:58.398505   35664 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:58.398660   35664 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:23:58.399009   35664 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:23:58.399055   35664 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:23:58.413560   35664 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0422 17:23:58.413974   35664 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:23:58.414497   35664 main.go:141] libmachine: Using API Version  1
	I0422 17:23:58.414524   35664 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:23:58.414850   35664 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:23:58.415024   35664 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:23:58.415233   35664 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:23:58.415256   35664 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:23:58.418055   35664 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:58.418503   35664 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:23:58.418541   35664 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:23:58.418671   35664 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:23:58.418872   35664 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:23:58.419054   35664 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:23:58.419204   35664 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:23:58.500185   35664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:23:58.515982   35664 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
E0422 17:24:02.847801   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (4.513619649s)

                                                
                                                
-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:24:00.520551   35779 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:24:00.520650   35779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:00.520655   35779 out.go:304] Setting ErrFile to fd 2...
	I0422 17:24:00.520659   35779 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:00.520838   35779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:24:00.521006   35779 out.go:298] Setting JSON to false
	I0422 17:24:00.521030   35779 mustload.go:65] Loading cluster: ha-025067
	I0422 17:24:00.521198   35779 notify.go:220] Checking for updates...
	I0422 17:24:00.521402   35779 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:24:00.521416   35779 status.go:255] checking status of ha-025067 ...
	I0422 17:24:00.521798   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:00.521851   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:00.541079   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39129
	I0422 17:24:00.541570   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:00.542168   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:00.542190   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:00.542638   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:00.542870   35779 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:24:00.544616   35779 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:24:00.544639   35779 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:00.544966   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:00.545022   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:00.560575   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46223
	I0422 17:24:00.560930   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:00.561443   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:00.561471   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:00.561786   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:00.562038   35779 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:24:00.565104   35779 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:00.565477   35779 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:00.565497   35779 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:00.565831   35779 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:00.566130   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:00.566166   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:00.580731   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0422 17:24:00.581156   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:00.581641   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:00.581667   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:00.582067   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:00.582255   35779 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:24:00.582469   35779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:00.582491   35779 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:24:00.585164   35779 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:00.585562   35779 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:00.585582   35779 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:00.585764   35779 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:24:00.585956   35779 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:24:00.586091   35779 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:24:00.586218   35779 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:24:00.671886   35779 ssh_runner.go:195] Run: systemctl --version
	I0422 17:24:00.678231   35779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:00.696843   35779 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:00.696871   35779 api_server.go:166] Checking apiserver status ...
	I0422 17:24:00.696903   35779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:00.713086   35779 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:24:00.724204   35779 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:00.724257   35779 ssh_runner.go:195] Run: ls
	I0422 17:24:00.729615   35779 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:00.734324   35779 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:00.734350   35779 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:24:00.734362   35779 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:00.734381   35779 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:24:00.734793   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:00.734851   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:00.750748   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35423
	I0422 17:24:00.751282   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:00.751790   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:00.751810   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:00.752189   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:00.752424   35779 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:24:00.753934   35779 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:24:00.753959   35779 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:00.754303   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:00.754337   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:00.768961   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34007
	I0422 17:24:00.769398   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:00.769889   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:00.769919   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:00.770277   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:00.770507   35779 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:24:00.773161   35779 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:00.773633   35779 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:00.773670   35779 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:00.773794   35779 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:00.774231   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:00.774284   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:00.789363   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39163
	I0422 17:24:00.789808   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:00.790316   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:00.790336   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:00.790643   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:00.790824   35779 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:24:00.790993   35779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:00.791012   35779 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:24:00.793748   35779 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:00.794166   35779 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:00.794195   35779 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:00.794318   35779 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:24:00.794522   35779 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:24:00.794661   35779 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:24:00.794774   35779 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	W0422 17:24:01.207431   35779 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:01.207494   35779 retry.go:31] will retry after 331.864026ms: dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:24:04.599446   35779 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:24:04.599571   35779 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0422 17:24:04.599593   35779 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:04.599601   35779 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:24:04.599623   35779 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:04.599632   35779 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:24:04.600096   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:04.600148   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:04.615697   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37431
	I0422 17:24:04.616125   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:04.616598   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:04.616625   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:04.616980   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:04.617163   35779 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:24:04.618880   35779 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:24:04.618899   35779 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:04.619239   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:04.619300   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:04.639341   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38173
	I0422 17:24:04.639766   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:04.640288   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:04.640315   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:04.640739   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:04.640978   35779 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:24:04.644279   35779 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:04.644783   35779 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:04.644810   35779 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:04.645002   35779 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:04.645404   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:04.645444   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:04.659923   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0422 17:24:04.660316   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:04.660753   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:04.660783   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:04.661099   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:04.661267   35779 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:24:04.661437   35779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:04.661456   35779 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:24:04.664114   35779 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:04.664517   35779 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:04.664558   35779 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:04.664746   35779 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:24:04.664893   35779 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:24:04.665064   35779 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:24:04.665193   35779 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:24:04.752458   35779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:04.771104   35779 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:04.771155   35779 api_server.go:166] Checking apiserver status ...
	I0422 17:24:04.771200   35779 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:04.788818   35779 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:24:04.800492   35779 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:04.800558   35779 ssh_runner.go:195] Run: ls
	I0422 17:24:04.805880   35779 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:04.810282   35779 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:04.810336   35779 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:24:04.810349   35779 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:04.810365   35779 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:24:04.810738   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:04.810780   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:04.826280   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0422 17:24:04.826700   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:04.827242   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:04.827292   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:04.827599   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:04.827850   35779 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:24:04.829393   35779 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:24:04.829411   35779 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:04.829691   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:04.829722   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:04.845609   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0422 17:24:04.846063   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:04.846600   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:04.846626   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:04.846985   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:04.847262   35779 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:24:04.850735   35779 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:04.851280   35779 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:04.851524   35779 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:04.851521   35779 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:04.851854   35779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:04.851897   35779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:04.866874   35779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45011
	I0422 17:24:04.867378   35779 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:04.867865   35779 main.go:141] libmachine: Using API Version  1
	I0422 17:24:04.867892   35779 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:04.868168   35779 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:04.868444   35779 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:24:04.868727   35779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:04.868748   35779 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:24:04.871688   35779 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:04.872085   35779 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:04.872104   35779 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:04.872300   35779 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:24:04.872469   35779 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:24:04.872609   35779 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:24:04.872717   35779 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:24:04.956365   35779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:04.973130   35779 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (3.751479311s)

-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0422 17:24:08.209285   35879 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:24:08.209777   35879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:08.209833   35879 out.go:304] Setting ErrFile to fd 2...
	I0422 17:24:08.209851   35879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:08.210330   35879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:24:08.210665   35879 out.go:298] Setting JSON to false
	I0422 17:24:08.210695   35879 mustload.go:65] Loading cluster: ha-025067
	I0422 17:24:08.210918   35879 notify.go:220] Checking for updates...
	I0422 17:24:08.211781   35879 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:24:08.211809   35879 status.go:255] checking status of ha-025067 ...
	I0422 17:24:08.212178   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:08.212215   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:08.231936   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41423
	I0422 17:24:08.232392   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:08.232916   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:08.232943   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:08.233349   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:08.233737   35879 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:24:08.235268   35879 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:24:08.235289   35879 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:08.235560   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:08.235596   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:08.249931   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0422 17:24:08.250263   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:08.250754   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:08.250769   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:08.251056   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:08.251256   35879 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:24:08.253968   35879 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:08.254368   35879 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:08.254396   35879 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:08.254555   35879 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:08.254870   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:08.254930   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:08.269275   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40921
	I0422 17:24:08.269760   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:08.270246   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:08.270270   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:08.270567   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:08.270810   35879 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:24:08.270987   35879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:08.271021   35879 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:24:08.274063   35879 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:08.274519   35879 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:08.274546   35879 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:08.274714   35879 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:24:08.274879   35879 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:24:08.275072   35879 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:24:08.275257   35879 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:24:08.363342   35879 ssh_runner.go:195] Run: systemctl --version
	I0422 17:24:08.369829   35879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:08.385531   35879 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:08.385558   35879 api_server.go:166] Checking apiserver status ...
	I0422 17:24:08.385585   35879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:08.401422   35879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:24:08.411586   35879 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:08.411630   35879 ssh_runner.go:195] Run: ls
	I0422 17:24:08.416598   35879 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:08.420970   35879 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:08.420992   35879 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:24:08.421004   35879 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:08.421023   35879 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:24:08.421310   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:08.421351   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:08.436201   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43541
	I0422 17:24:08.436624   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:08.437069   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:08.437088   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:08.437365   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:08.437526   35879 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:24:08.438945   35879 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:24:08.438962   35879 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:08.439300   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:08.439334   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:08.453773   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43769
	I0422 17:24:08.454166   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:08.454626   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:08.454658   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:08.454949   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:08.455176   35879 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:24:08.457598   35879 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:08.457989   35879 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:08.458027   35879 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:08.458172   35879 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:08.458629   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:08.458667   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:08.472658   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I0422 17:24:08.473069   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:08.473596   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:08.473621   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:08.473938   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:08.474102   35879 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:24:08.474294   35879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:08.474315   35879 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:24:08.477056   35879 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:08.477496   35879 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:08.477531   35879 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:08.477715   35879 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:24:08.477892   35879 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:24:08.478017   35879 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:24:08.478160   35879 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	W0422 17:24:11.543345   35879 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:24:11.543448   35879 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0422 17:24:11.543480   35879 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:11.543492   35879 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:24:11.543516   35879 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:11.543544   35879 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:24:11.543862   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:11.543933   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:11.558748   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0422 17:24:11.559195   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:11.559672   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:11.559696   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:11.560018   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:11.560222   35879 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:24:11.561992   35879 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:24:11.562009   35879 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:11.562305   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:11.562349   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:11.577543   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37907
	I0422 17:24:11.578064   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:11.578613   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:11.578639   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:11.578985   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:11.579265   35879 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:24:11.582177   35879 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:11.582708   35879 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:11.582739   35879 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:11.582942   35879 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:11.583392   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:11.583445   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:11.599885   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0422 17:24:11.600286   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:11.600777   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:11.600806   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:11.601106   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:11.601313   35879 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:24:11.601493   35879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:11.601515   35879 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:24:11.604662   35879 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:11.605124   35879 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:11.605150   35879 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:11.605304   35879 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:24:11.605513   35879 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:24:11.605681   35879 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:24:11.605808   35879 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:24:11.687385   35879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:11.706395   35879 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:11.706424   35879 api_server.go:166] Checking apiserver status ...
	I0422 17:24:11.706464   35879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:11.721903   35879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:24:11.733056   35879 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:11.733111   35879 ssh_runner.go:195] Run: ls
	I0422 17:24:11.741587   35879 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:11.746570   35879 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:11.746594   35879 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:24:11.746603   35879 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:11.746616   35879 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:24:11.746882   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:11.746921   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:11.763500   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33169
	I0422 17:24:11.763970   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:11.764423   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:11.764451   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:11.764813   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:11.764997   35879 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:24:11.766458   35879 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:24:11.766476   35879 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:11.766840   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:11.766881   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:11.781347   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0422 17:24:11.781748   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:11.782181   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:11.782204   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:11.782562   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:11.782802   35879 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:24:11.785596   35879 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:11.785996   35879 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:11.786034   35879 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:11.786168   35879 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:11.786456   35879 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:11.786488   35879 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:11.802516   35879 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0422 17:24:11.802905   35879 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:11.803430   35879 main.go:141] libmachine: Using API Version  1
	I0422 17:24:11.803461   35879 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:11.803771   35879 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:11.803938   35879 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:24:11.804126   35879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:11.804152   35879 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:24:11.806741   35879 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:11.807113   35879 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:11.807156   35879 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:11.807269   35879 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:24:11.807413   35879 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:24:11.807518   35879 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:24:11.807625   35879 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:24:11.887045   35879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:11.904409   35879 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (4.544096499s)

-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0422 17:24:13.713376   35996 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:24:13.713604   35996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:13.713613   35996 out.go:304] Setting ErrFile to fd 2...
	I0422 17:24:13.713617   35996 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:13.713788   35996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:24:13.713950   35996 out.go:298] Setting JSON to false
	I0422 17:24:13.713979   35996 mustload.go:65] Loading cluster: ha-025067
	I0422 17:24:13.714037   35996 notify.go:220] Checking for updates...
	I0422 17:24:13.714358   35996 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:24:13.714372   35996 status.go:255] checking status of ha-025067 ...
	I0422 17:24:13.714739   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:13.714785   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:13.731497   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0422 17:24:13.731964   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:13.732552   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:13.732582   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:13.732933   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:13.733108   35996 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:24:13.734652   35996 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:24:13.734678   35996 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:13.734982   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:13.735036   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:13.750721   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
	I0422 17:24:13.751093   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:13.751726   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:13.751759   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:13.752082   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:13.752269   35996 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:24:13.754888   35996 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:13.755297   35996 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:13.755315   35996 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:13.755452   35996 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:13.755741   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:13.755788   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:13.771080   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35127
	I0422 17:24:13.771518   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:13.771919   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:13.771940   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:13.772225   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:13.772361   35996 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:24:13.772562   35996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:13.772591   35996 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:24:13.775085   35996 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:13.775481   35996 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:13.775506   35996 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:13.775625   35996 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:24:13.775811   35996 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:24:13.775965   35996 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:24:13.776107   35996 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:24:13.859554   35996 ssh_runner.go:195] Run: systemctl --version
	I0422 17:24:13.865912   35996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:13.881040   35996 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:13.881066   35996 api_server.go:166] Checking apiserver status ...
	I0422 17:24:13.881096   35996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:13.896586   35996 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:24:13.906660   35996 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:13.906720   35996 ssh_runner.go:195] Run: ls
	I0422 17:24:13.912110   35996 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:13.916240   35996 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:13.916266   35996 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:24:13.916276   35996 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:13.916291   35996 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:24:13.916582   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:13.916621   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:13.934006   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0422 17:24:13.934430   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:13.934902   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:13.934922   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:13.935296   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:13.935500   35996 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:24:13.937099   35996 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:24:13.937115   35996 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:13.937436   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:13.937481   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:13.953575   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40431
	I0422 17:24:13.953974   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:13.954429   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:13.954449   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:13.954732   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:13.954938   35996 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:24:13.957736   35996 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:13.958227   35996 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:13.958249   35996 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:13.958427   35996 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:13.958749   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:13.958798   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:13.974093   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34845
	I0422 17:24:13.974505   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:13.974989   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:13.975014   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:13.975360   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:13.975540   35996 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:24:13.975739   35996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:13.975761   35996 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:24:13.978602   35996 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:13.978985   35996 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:13.979016   35996 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:13.979184   35996 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:24:13.979354   35996 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:24:13.979517   35996 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:24:13.979668   35996 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	W0422 17:24:14.619303   35996 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:14.619345   35996 retry.go:31] will retry after 165.115996ms: dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:24:17.847380   35996 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:24:17.847473   35996 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0422 17:24:17.847495   35996 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:17.847507   35996 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:24:17.847554   35996 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:17.847565   35996 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:24:17.847905   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:17.847974   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:17.863566   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34165
	I0422 17:24:17.863971   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:17.864381   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:17.864402   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:17.864728   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:17.864936   35996 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:24:17.866499   35996 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:24:17.866514   35996 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:17.866878   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:17.866933   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:17.881422   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0422 17:24:17.882193   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:17.883502   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:17.883525   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:17.883867   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:17.884071   35996 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:24:17.886749   35996 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:17.887107   35996 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:17.887153   35996 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:17.887288   35996 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:17.887599   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:17.887635   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:17.901773   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0422 17:24:17.902192   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:17.902601   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:17.902621   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:17.902883   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:17.903048   35996 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:24:17.903240   35996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:17.903264   35996 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:24:17.905930   35996 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:17.906351   35996 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:17.906373   35996 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:17.906503   35996 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:24:17.906658   35996 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:24:17.906766   35996 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:24:17.906893   35996 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:24:17.990834   35996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:18.006142   35996 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:18.006168   35996 api_server.go:166] Checking apiserver status ...
	I0422 17:24:18.006198   35996 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:18.019357   35996 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:24:18.030015   35996 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:18.030065   35996 ssh_runner.go:195] Run: ls
	I0422 17:24:18.034601   35996 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:18.042978   35996 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:18.043006   35996 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:24:18.043017   35996 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:18.043037   35996 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:24:18.043468   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:18.043517   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:18.058916   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I0422 17:24:18.059344   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:18.059838   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:18.059860   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:18.060180   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:18.060340   35996 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:24:18.061954   35996 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:24:18.061974   35996 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:18.062246   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:18.062278   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:18.080093   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44417
	I0422 17:24:18.080532   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:18.081058   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:18.081089   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:18.081464   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:18.081690   35996 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:24:18.084868   35996 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:18.085326   35996 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:18.085360   35996 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:18.085600   35996 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:18.085987   35996 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:18.086030   35996 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:18.101580   35996 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I0422 17:24:18.101997   35996 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:18.102507   35996 main.go:141] libmachine: Using API Version  1
	I0422 17:24:18.102529   35996 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:18.102822   35996 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:18.103029   35996 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:24:18.103243   35996 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:18.103265   35996 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:24:18.106076   35996 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:18.106529   35996 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:18.106559   35996 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:18.106675   35996 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:24:18.106820   35996 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:24:18.106951   35996 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:24:18.107074   35996 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:24:18.187051   35996 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:18.202952   35996 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
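The trace above repeats the same probe sequence for every node: run `df -h /var` over SSH, check whether the kubelet unit is active, locate the kube-apiserver process, and finally hit the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz. As a minimal illustrative sketch (not the minikube source), the final healthz probe can be reproduced like this; the endpoint URL is taken from the log, and skipping TLS verification is an assumption made only to keep the example self-contained:

	// healthz_probe.go - sketch of the apiserver health check seen in the trace.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the example only: trust the VIP's cert blindly.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// On a healthy control plane this prints "healthz returned 200: ok",
		// matching the "returned 200: ok" lines in the log above.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
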
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (3.730615503s)

                                                
                                                
-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:24:21.504425   36096 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:24:21.504537   36096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:21.504548   36096 out.go:304] Setting ErrFile to fd 2...
	I0422 17:24:21.504554   36096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:21.504763   36096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:24:21.504970   36096 out.go:298] Setting JSON to false
	I0422 17:24:21.504999   36096 mustload.go:65] Loading cluster: ha-025067
	I0422 17:24:21.505056   36096 notify.go:220] Checking for updates...
	I0422 17:24:21.505446   36096 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:24:21.505466   36096 status.go:255] checking status of ha-025067 ...
	I0422 17:24:21.505860   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:21.505924   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:21.522346   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0422 17:24:21.522780   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:21.523424   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:21.523476   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:21.523878   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:21.524106   36096 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:24:21.525683   36096 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:24:21.525711   36096 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:21.526018   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:21.526060   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:21.541605   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40529
	I0422 17:24:21.542006   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:21.542494   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:21.542548   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:21.542931   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:21.543176   36096 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:24:21.546130   36096 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:21.546602   36096 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:21.546631   36096 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:21.546771   36096 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:21.547067   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:21.547143   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:21.562489   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
	I0422 17:24:21.562866   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:21.563291   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:21.563323   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:21.563631   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:21.563794   36096 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:24:21.563986   36096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:21.564010   36096 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:24:21.566350   36096 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:21.566722   36096 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:21.566747   36096 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:21.566877   36096 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:24:21.567029   36096 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:24:21.567178   36096 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:24:21.567333   36096 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:24:21.655053   36096 ssh_runner.go:195] Run: systemctl --version
	I0422 17:24:21.661335   36096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:21.675754   36096 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:21.675781   36096 api_server.go:166] Checking apiserver status ...
	I0422 17:24:21.675822   36096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:21.690106   36096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:24:21.699999   36096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:21.700057   36096 ssh_runner.go:195] Run: ls
	I0422 17:24:21.704644   36096 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:21.708939   36096 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:21.708962   36096 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:24:21.708972   36096 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:21.708986   36096 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:24:21.709303   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:21.709344   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:21.724353   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I0422 17:24:21.724844   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:21.725309   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:21.725331   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:21.725649   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:21.725882   36096 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:24:21.727356   36096 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:24:21.727372   36096 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:21.727705   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:21.727740   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:21.742968   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32837
	I0422 17:24:21.743472   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:21.744009   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:21.744046   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:21.744322   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:21.744506   36096 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:24:21.747275   36096 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:21.747711   36096 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:21.747741   36096 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:21.747907   36096 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:24:21.748292   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:21.748326   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:21.763499   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44687
	I0422 17:24:21.763846   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:21.764314   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:21.764334   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:21.764686   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:21.764898   36096 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:24:21.765095   36096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:21.765114   36096 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:24:21.768083   36096 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:21.768482   36096 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:24:21.768517   36096 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:24:21.768703   36096 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:24:21.768884   36096 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:24:21.769032   36096 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:24:21.769227   36096 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	W0422 17:24:24.823337   36096 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.56:22: connect: no route to host
	W0422 17:24:24.823413   36096 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	E0422 17:24:24.823426   36096 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:24.823441   36096 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:24:24.823458   36096 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.56:22: connect: no route to host
	I0422 17:24:24.823465   36096 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:24:24.823792   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:24.823843   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:24.839292   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35257
	I0422 17:24:24.839728   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:24.840177   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:24.840210   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:24.840503   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:24.840716   36096 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:24:24.842473   36096 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:24:24.842491   36096 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:24.842922   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:24.842994   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:24.857753   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37037
	I0422 17:24:24.858173   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:24.858703   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:24.858733   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:24.859023   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:24.859220   36096 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:24:24.862082   36096 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:24.862525   36096 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:24.862562   36096 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:24.862703   36096 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:24.863530   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:24.863577   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:24.879275   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40851
	I0422 17:24:24.879792   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:24.880234   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:24.880258   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:24.880535   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:24.880712   36096 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:24:24.880867   36096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:24.880888   36096 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:24:24.883903   36096 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:24.884443   36096 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:24.884469   36096 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:24.884590   36096 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:24:24.884767   36096 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:24:24.884924   36096 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:24:24.885030   36096 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:24:24.967746   36096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:24.987688   36096 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:24.987723   36096 api_server.go:166] Checking apiserver status ...
	I0422 17:24:24.987765   36096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:25.000859   36096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:24:25.010794   36096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:25.010856   36096 ssh_runner.go:195] Run: ls
	I0422 17:24:25.016029   36096 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:25.020542   36096 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:25.020565   36096 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:24:25.020573   36096 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:25.020590   36096 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:24:25.020889   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:25.020939   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:25.035685   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0422 17:24:25.036158   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:25.036704   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:25.036728   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:25.037052   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:25.037220   36096 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:24:25.038779   36096 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:24:25.038796   36096 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:25.039061   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:25.039090   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:25.054243   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42599
	I0422 17:24:25.054641   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:25.055218   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:25.055239   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:25.055531   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:25.055698   36096 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:24:25.058871   36096 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:25.059271   36096 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:25.059306   36096 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:25.059520   36096 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:25.059875   36096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:25.059924   36096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:25.076022   36096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36021
	I0422 17:24:25.076563   36096 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:25.077022   36096 main.go:141] libmachine: Using API Version  1
	I0422 17:24:25.077038   36096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:25.077437   36096 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:25.077617   36096 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:24:25.077818   36096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:25.077836   36096 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:24:25.080917   36096 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:25.081443   36096 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:25.081469   36096 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:25.081620   36096 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:24:25.081785   36096 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:24:25.081938   36096 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:24:25.082058   36096 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:24:25.163198   36096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:25.179569   36096 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
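This second run shows why ha-025067-m02 is reported as host: Error rather than Stopped: libvirt still returns a running state for the VM, but the SSH dial to 192.168.39.56:22 fails with "connect: no route to host", so kubelet and apiserver are downgraded to Nonexistent. A hedged sketch of that reachability check is below; the address comes from the trace, while the 3-second timeout is an assumption for illustration only:

	// ssh_reachability.go - sketch of the TCP probe that fails in the trace above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.56:22", 3*time.Second)
		if err != nil {
			// On the failing run this is where "connect: no route to host"
			// surfaces, and the node is then reported as Host:Error.
			fmt.Println("node unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable; node can be probed further")
	}
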
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 7 (637.715228ms)

                                                
                                                
-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-025067-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:24:34.204882   36248 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:24:34.204996   36248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:34.205009   36248 out.go:304] Setting ErrFile to fd 2...
	I0422 17:24:34.205018   36248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:34.205208   36248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:24:34.205370   36248 out.go:298] Setting JSON to false
	I0422 17:24:34.205393   36248 mustload.go:65] Loading cluster: ha-025067
	I0422 17:24:34.205449   36248 notify.go:220] Checking for updates...
	I0422 17:24:34.205765   36248 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:24:34.205778   36248 status.go:255] checking status of ha-025067 ...
	I0422 17:24:34.206144   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.206198   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.222913   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34047
	I0422 17:24:34.223315   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.223871   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.223907   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.224201   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.224377   36248 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:24:34.226273   36248 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:24:34.226298   36248 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:34.226584   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.226624   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.240874   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35499
	I0422 17:24:34.241257   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.241794   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.241821   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.242173   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.242363   36248 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:24:34.245296   36248 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:34.245747   36248 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:34.245788   36248 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:34.245917   36248 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:24:34.246203   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.246237   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.261583   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43975
	I0422 17:24:34.261949   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.262410   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.262439   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.262752   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.262953   36248 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:24:34.263185   36248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:34.263218   36248 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:24:34.265852   36248 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:34.266378   36248 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:24:34.266414   36248 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:24:34.266538   36248 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:24:34.266719   36248 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:24:34.266872   36248 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:24:34.267011   36248 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:24:34.356163   36248 ssh_runner.go:195] Run: systemctl --version
	I0422 17:24:34.362925   36248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:34.379161   36248 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:34.379194   36248 api_server.go:166] Checking apiserver status ...
	I0422 17:24:34.379228   36248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:34.396211   36248 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup
	W0422 17:24:34.406170   36248 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1162/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:34.406248   36248 ssh_runner.go:195] Run: ls
	I0422 17:24:34.410748   36248 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:34.414882   36248 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:34.414901   36248 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:24:34.414918   36248 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:34.414932   36248 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:24:34.415232   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.415264   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.429733   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36497
	I0422 17:24:34.430125   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.430593   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.430616   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.430902   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.431087   36248 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:24:34.432847   36248 status.go:330] ha-025067-m02 host status = "Stopped" (err=<nil>)
	I0422 17:24:34.432865   36248 status.go:343] host is not running, skipping remaining checks
	I0422 17:24:34.432881   36248 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:34.432901   36248 status.go:255] checking status of ha-025067-m03 ...
	I0422 17:24:34.433188   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.433245   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.447613   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I0422 17:24:34.448050   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.448516   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.448540   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.448872   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.449045   36248 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:24:34.450868   36248 status.go:330] ha-025067-m03 host status = "Running" (err=<nil>)
	I0422 17:24:34.450884   36248 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:34.451248   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.451284   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.467558   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I0422 17:24:34.468089   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.468610   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.468636   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.468934   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.469150   36248 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:24:34.471600   36248 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:34.472011   36248 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:34.472039   36248 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:34.472145   36248 host.go:66] Checking if "ha-025067-m03" exists ...
	I0422 17:24:34.472468   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.472497   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.487803   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0422 17:24:34.488209   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.488644   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.488663   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.489005   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.489174   36248 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:24:34.489342   36248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:34.489359   36248 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:24:34.492372   36248 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:34.492829   36248 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:34.492866   36248 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:34.493018   36248 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:24:34.493219   36248 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:24:34.493375   36248 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:24:34.493514   36248 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:24:34.576334   36248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:34.592720   36248 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:24:34.592745   36248 api_server.go:166] Checking apiserver status ...
	I0422 17:24:34.592790   36248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:24:34.607096   36248 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup
	W0422 17:24:34.617916   36248 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1595/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:24:34.617992   36248 ssh_runner.go:195] Run: ls
	I0422 17:24:34.624403   36248 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:24:34.628806   36248 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:24:34.628834   36248 status.go:422] ha-025067-m03 apiserver status = Running (err=<nil>)
	I0422 17:24:34.628846   36248 status.go:257] ha-025067-m03 status: &{Name:ha-025067-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:24:34.628865   36248 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:24:34.629213   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.629260   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.644280   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44799
	I0422 17:24:34.644627   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.645079   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.645102   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.645424   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.645594   36248 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:24:34.647289   36248 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:24:34.647307   36248 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:34.647675   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.647720   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.662505   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42227
	I0422 17:24:34.662918   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.663378   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.663400   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.663717   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.663887   36248 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:24:34.666640   36248 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:34.667066   36248 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:34.667088   36248 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:34.667214   36248 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:24:34.667520   36248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:34.667563   36248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:34.682969   36248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44289
	I0422 17:24:34.683419   36248 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:34.683921   36248 main.go:141] libmachine: Using API Version  1
	I0422 17:24:34.683947   36248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:34.684298   36248 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:34.684495   36248 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:24:34.684698   36248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:24:34.684725   36248 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:24:34.687484   36248 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:34.688181   36248 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:34.688211   36248 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:34.688335   36248 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:24:34.688525   36248 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:24:34.688703   36248 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:24:34.688839   36248 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:24:34.771070   36248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:24:34.787619   36248 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-025067 -n ha-025067
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-025067 logs -n 25: (1.581654823s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067:/home/docker/cp-test_ha-025067-m03_ha-025067.txt                      |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067 sudo cat                                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067.txt                                |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m02:/home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m04 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp testdata/cp-test.txt                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067:/home/docker/cp-test_ha-025067-m04_ha-025067.txt                      |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067 sudo cat                                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067.txt                                |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m02:/home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03:/home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m03 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-025067 node stop m02 -v=7                                                    | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-025067 node start m02 -v=7                                                   | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:23 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 17:16:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 17:16:52.541957   30338 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:16:52.542113   30338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:16:52.542124   30338 out.go:304] Setting ErrFile to fd 2...
	I0422 17:16:52.542131   30338 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:16:52.542370   30338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:16:52.542997   30338 out.go:298] Setting JSON to false
	I0422 17:16:52.543963   30338 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3558,"bootTime":1713802655,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:16:52.544023   30338 start.go:139] virtualization: kvm guest
	I0422 17:16:52.546239   30338 out.go:177] * [ha-025067] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:16:52.547926   30338 notify.go:220] Checking for updates...
	I0422 17:16:52.549163   30338 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:16:52.550487   30338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:16:52.551790   30338 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:16:52.552990   30338 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:16:52.554110   30338 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:16:52.555258   30338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:16:52.556755   30338 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:16:52.591545   30338 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 17:16:52.592934   30338 start.go:297] selected driver: kvm2
	I0422 17:16:52.592952   30338 start.go:901] validating driver "kvm2" against <nil>
	I0422 17:16:52.592970   30338 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:16:52.593731   30338 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:16:52.593822   30338 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 17:16:52.608623   30338 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 17:16:52.608678   30338 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 17:16:52.608883   30338 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:16:52.608934   30338 cni.go:84] Creating CNI manager for ""
	I0422 17:16:52.608946   30338 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0422 17:16:52.608953   30338 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0422 17:16:52.609003   30338 start.go:340] cluster config:
	{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:16:52.609091   30338 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:16:52.611390   30338 out.go:177] * Starting "ha-025067" primary control-plane node in "ha-025067" cluster
	I0422 17:16:52.612836   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:16:52.612868   30338 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 17:16:52.612876   30338 cache.go:56] Caching tarball of preloaded images
	I0422 17:16:52.612948   30338 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:16:52.612959   30338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:16:52.613259   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:16:52.613279   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json: {Name:mkfe9ab9288b859a19abb2db630c3d4dba4d6aff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:16:52.613408   30338 start.go:360] acquireMachinesLock for ha-025067: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:16:52.613435   30338 start.go:364] duration metric: took 15.159µs to acquireMachinesLock for "ha-025067"
	I0422 17:16:52.613450   30338 start.go:93] Provisioning new machine with config: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:16:52.613508   30338 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 17:16:52.616169   30338 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 17:16:52.616330   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:16:52.616365   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:16:52.630862   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0422 17:16:52.631320   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:16:52.631823   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:16:52.631846   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:16:52.632177   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:16:52.632356   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:16:52.632507   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:16:52.632625   30338 start.go:159] libmachine.API.Create for "ha-025067" (driver="kvm2")
	I0422 17:16:52.632657   30338 client.go:168] LocalClient.Create starting
	I0422 17:16:52.632693   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 17:16:52.632726   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:16:52.632744   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:16:52.632797   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 17:16:52.632815   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:16:52.632829   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:16:52.632844   30338 main.go:141] libmachine: Running pre-create checks...
	I0422 17:16:52.632856   30338 main.go:141] libmachine: (ha-025067) Calling .PreCreateCheck
	I0422 17:16:52.633193   30338 main.go:141] libmachine: (ha-025067) Calling .GetConfigRaw
	I0422 17:16:52.633530   30338 main.go:141] libmachine: Creating machine...
	I0422 17:16:52.633544   30338 main.go:141] libmachine: (ha-025067) Calling .Create
	I0422 17:16:52.633656   30338 main.go:141] libmachine: (ha-025067) Creating KVM machine...
	I0422 17:16:52.634912   30338 main.go:141] libmachine: (ha-025067) DBG | found existing default KVM network
	I0422 17:16:52.635784   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:52.635601   30361 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d980}
	I0422 17:16:52.635809   30338 main.go:141] libmachine: (ha-025067) DBG | created network xml: 
	I0422 17:16:52.635822   30338 main.go:141] libmachine: (ha-025067) DBG | <network>
	I0422 17:16:52.635834   30338 main.go:141] libmachine: (ha-025067) DBG |   <name>mk-ha-025067</name>
	I0422 17:16:52.635843   30338 main.go:141] libmachine: (ha-025067) DBG |   <dns enable='no'/>
	I0422 17:16:52.635861   30338 main.go:141] libmachine: (ha-025067) DBG |   
	I0422 17:16:52.635875   30338 main.go:141] libmachine: (ha-025067) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 17:16:52.635892   30338 main.go:141] libmachine: (ha-025067) DBG |     <dhcp>
	I0422 17:16:52.635921   30338 main.go:141] libmachine: (ha-025067) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 17:16:52.635943   30338 main.go:141] libmachine: (ha-025067) DBG |     </dhcp>
	I0422 17:16:52.635958   30338 main.go:141] libmachine: (ha-025067) DBG |   </ip>
	I0422 17:16:52.635968   30338 main.go:141] libmachine: (ha-025067) DBG |   
	I0422 17:16:52.635977   30338 main.go:141] libmachine: (ha-025067) DBG | </network>
	I0422 17:16:52.635985   30338 main.go:141] libmachine: (ha-025067) DBG | 
	I0422 17:16:52.641459   30338 main.go:141] libmachine: (ha-025067) DBG | trying to create private KVM network mk-ha-025067 192.168.39.0/24...
	I0422 17:16:52.705304   30338 main.go:141] libmachine: (ha-025067) DBG | private KVM network mk-ha-025067 192.168.39.0/24 created
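	The network XML logged above is what the kvm2 driver hands to libvirt to create the cluster's private network (DHCP range 192.168.39.2-253, DNS disabled). As a rough, hand-run equivalent outside minikube, the same definition could be registered with the stock virsh CLI; this is only an illustrative sketch with a placeholder file name, not the driver's actual code path, which drives the libvirt API directly:
	# sketch (assumes the XML above was saved to mk-ha-025067.xml)
	virsh --connect qemu:///system net-define mk-ha-025067.xml   # register the network definition
	virsh --connect qemu:///system net-start mk-ha-025067        # activate it (creates the virbr bridge)
	virsh --connect qemu:///system net-list --all                # confirm mk-ha-025067 is listed as active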
	I0422 17:16:52.705336   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:52.705269   30361 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:16:52.705349   30338 main.go:141] libmachine: (ha-025067) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067 ...
	I0422 17:16:52.705365   30338 main.go:141] libmachine: (ha-025067) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 17:16:52.705520   30338 main.go:141] libmachine: (ha-025067) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 17:16:52.932516   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:52.932370   30361 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa...
	I0422 17:16:53.020479   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:53.020310   30361 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/ha-025067.rawdisk...
	I0422 17:16:53.020535   30338 main.go:141] libmachine: (ha-025067) DBG | Writing magic tar header
	I0422 17:16:53.020550   30338 main.go:141] libmachine: (ha-025067) DBG | Writing SSH key tar header
	I0422 17:16:53.020563   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:53.020480   30361 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067 ...
	I0422 17:16:53.020642   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067
	I0422 17:16:53.020680   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067 (perms=drwx------)
	I0422 17:16:53.020688   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 17:16:53.020695   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 17:16:53.020708   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 17:16:53.020722   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 17:16:53.020738   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 17:16:53.020748   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:16:53.020757   30338 main.go:141] libmachine: (ha-025067) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 17:16:53.020771   30338 main.go:141] libmachine: (ha-025067) Creating domain...
	I0422 17:16:53.020780   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 17:16:53.020788   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 17:16:53.020800   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home/jenkins
	I0422 17:16:53.020829   30338 main.go:141] libmachine: (ha-025067) DBG | Checking permissions on dir: /home
	I0422 17:16:53.020852   30338 main.go:141] libmachine: (ha-025067) DBG | Skipping /home - not owner
	I0422 17:16:53.021802   30338 main.go:141] libmachine: (ha-025067) define libvirt domain using xml: 
	I0422 17:16:53.021819   30338 main.go:141] libmachine: (ha-025067) <domain type='kvm'>
	I0422 17:16:53.021828   30338 main.go:141] libmachine: (ha-025067)   <name>ha-025067</name>
	I0422 17:16:53.021835   30338 main.go:141] libmachine: (ha-025067)   <memory unit='MiB'>2200</memory>
	I0422 17:16:53.021843   30338 main.go:141] libmachine: (ha-025067)   <vcpu>2</vcpu>
	I0422 17:16:53.021850   30338 main.go:141] libmachine: (ha-025067)   <features>
	I0422 17:16:53.021859   30338 main.go:141] libmachine: (ha-025067)     <acpi/>
	I0422 17:16:53.021867   30338 main.go:141] libmachine: (ha-025067)     <apic/>
	I0422 17:16:53.021880   30338 main.go:141] libmachine: (ha-025067)     <pae/>
	I0422 17:16:53.021916   30338 main.go:141] libmachine: (ha-025067)     
	I0422 17:16:53.021924   30338 main.go:141] libmachine: (ha-025067)   </features>
	I0422 17:16:53.021936   30338 main.go:141] libmachine: (ha-025067)   <cpu mode='host-passthrough'>
	I0422 17:16:53.021954   30338 main.go:141] libmachine: (ha-025067)   
	I0422 17:16:53.021962   30338 main.go:141] libmachine: (ha-025067)   </cpu>
	I0422 17:16:53.021967   30338 main.go:141] libmachine: (ha-025067)   <os>
	I0422 17:16:53.021975   30338 main.go:141] libmachine: (ha-025067)     <type>hvm</type>
	I0422 17:16:53.022007   30338 main.go:141] libmachine: (ha-025067)     <boot dev='cdrom'/>
	I0422 17:16:53.022034   30338 main.go:141] libmachine: (ha-025067)     <boot dev='hd'/>
	I0422 17:16:53.022046   30338 main.go:141] libmachine: (ha-025067)     <bootmenu enable='no'/>
	I0422 17:16:53.022056   30338 main.go:141] libmachine: (ha-025067)   </os>
	I0422 17:16:53.022063   30338 main.go:141] libmachine: (ha-025067)   <devices>
	I0422 17:16:53.022074   30338 main.go:141] libmachine: (ha-025067)     <disk type='file' device='cdrom'>
	I0422 17:16:53.022090   30338 main.go:141] libmachine: (ha-025067)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/boot2docker.iso'/>
	I0422 17:16:53.022101   30338 main.go:141] libmachine: (ha-025067)       <target dev='hdc' bus='scsi'/>
	I0422 17:16:53.022110   30338 main.go:141] libmachine: (ha-025067)       <readonly/>
	I0422 17:16:53.022120   30338 main.go:141] libmachine: (ha-025067)     </disk>
	I0422 17:16:53.022129   30338 main.go:141] libmachine: (ha-025067)     <disk type='file' device='disk'>
	I0422 17:16:53.022135   30338 main.go:141] libmachine: (ha-025067)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 17:16:53.022146   30338 main.go:141] libmachine: (ha-025067)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/ha-025067.rawdisk'/>
	I0422 17:16:53.022152   30338 main.go:141] libmachine: (ha-025067)       <target dev='hda' bus='virtio'/>
	I0422 17:16:53.022157   30338 main.go:141] libmachine: (ha-025067)     </disk>
	I0422 17:16:53.022164   30338 main.go:141] libmachine: (ha-025067)     <interface type='network'>
	I0422 17:16:53.022172   30338 main.go:141] libmachine: (ha-025067)       <source network='mk-ha-025067'/>
	I0422 17:16:53.022182   30338 main.go:141] libmachine: (ha-025067)       <model type='virtio'/>
	I0422 17:16:53.022189   30338 main.go:141] libmachine: (ha-025067)     </interface>
	I0422 17:16:53.022205   30338 main.go:141] libmachine: (ha-025067)     <interface type='network'>
	I0422 17:16:53.022232   30338 main.go:141] libmachine: (ha-025067)       <source network='default'/>
	I0422 17:16:53.022259   30338 main.go:141] libmachine: (ha-025067)       <model type='virtio'/>
	I0422 17:16:53.022286   30338 main.go:141] libmachine: (ha-025067)     </interface>
	I0422 17:16:53.022302   30338 main.go:141] libmachine: (ha-025067)     <serial type='pty'>
	I0422 17:16:53.022316   30338 main.go:141] libmachine: (ha-025067)       <target port='0'/>
	I0422 17:16:53.022327   30338 main.go:141] libmachine: (ha-025067)     </serial>
	I0422 17:16:53.022338   30338 main.go:141] libmachine: (ha-025067)     <console type='pty'>
	I0422 17:16:53.022355   30338 main.go:141] libmachine: (ha-025067)       <target type='serial' port='0'/>
	I0422 17:16:53.022373   30338 main.go:141] libmachine: (ha-025067)     </console>
	I0422 17:16:53.022384   30338 main.go:141] libmachine: (ha-025067)     <rng model='virtio'>
	I0422 17:16:53.022394   30338 main.go:141] libmachine: (ha-025067)       <backend model='random'>/dev/random</backend>
	I0422 17:16:53.022404   30338 main.go:141] libmachine: (ha-025067)     </rng>
	I0422 17:16:53.022412   30338 main.go:141] libmachine: (ha-025067)     
	I0422 17:16:53.022426   30338 main.go:141] libmachine: (ha-025067)     
	I0422 17:16:53.022439   30338 main.go:141] libmachine: (ha-025067)   </devices>
	I0422 17:16:53.022449   30338 main.go:141] libmachine: (ha-025067) </domain>
	I0422 17:16:53.022460   30338 main.go:141] libmachine: (ha-025067) 
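	With the domain XML defined, the driver boots the VM and then repeatedly polls the network's DHCP leases until the guest's MAC address shows up with an address, which is what the "Waiting to get IP" retries below are doing. A hedged, hand-run sketch of the same sequence (placeholder file name; the driver itself talks to libvirt programmatically rather than through virsh) would be:
	# sketch (assumes the domain XML above was saved to ha-025067.xml)
	virsh --connect qemu:///system define ha-025067.xml           # register the domain
	virsh --connect qemu:///system start ha-025067                # boot the VM
	virsh --connect qemu:///system net-dhcp-leases mk-ha-025067   # re-run until MAC 52:54:00:8b:2a:21 shows an IP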
	I0422 17:16:53.026948   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:a5:56:10 in network default
	I0422 17:16:53.027518   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:53.027556   30338 main.go:141] libmachine: (ha-025067) Ensuring networks are active...
	I0422 17:16:53.028145   30338 main.go:141] libmachine: (ha-025067) Ensuring network default is active
	I0422 17:16:53.028505   30338 main.go:141] libmachine: (ha-025067) Ensuring network mk-ha-025067 is active
	I0422 17:16:53.028967   30338 main.go:141] libmachine: (ha-025067) Getting domain xml...
	I0422 17:16:53.029600   30338 main.go:141] libmachine: (ha-025067) Creating domain...
	I0422 17:16:54.194304   30338 main.go:141] libmachine: (ha-025067) Waiting to get IP...
	I0422 17:16:54.195315   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:54.195793   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:54.195821   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:54.195765   30361 retry.go:31] will retry after 207.971302ms: waiting for machine to come up
	I0422 17:16:54.405368   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:54.405849   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:54.405881   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:54.405803   30361 retry.go:31] will retry after 339.912064ms: waiting for machine to come up
	I0422 17:16:54.747484   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:54.747869   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:54.747901   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:54.747825   30361 retry.go:31] will retry after 306.603999ms: waiting for machine to come up
	I0422 17:16:55.056260   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:55.056704   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:55.056735   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:55.056654   30361 retry.go:31] will retry after 408.670158ms: waiting for machine to come up
	I0422 17:16:55.467196   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:55.467604   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:55.467629   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:55.467564   30361 retry.go:31] will retry after 638.292083ms: waiting for machine to come up
	I0422 17:16:56.107331   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:56.107755   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:56.107794   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:56.107719   30361 retry.go:31] will retry after 790.345835ms: waiting for machine to come up
	I0422 17:16:56.899646   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:56.900019   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:56.900054   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:56.899992   30361 retry.go:31] will retry after 896.720809ms: waiting for machine to come up
	I0422 17:16:57.798561   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:57.798968   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:57.799012   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:57.798920   30361 retry.go:31] will retry after 1.465416505s: waiting for machine to come up
	I0422 17:16:59.266468   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:16:59.266813   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:16:59.266866   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:16:59.266784   30361 retry.go:31] will retry after 1.392901232s: waiting for machine to come up
	I0422 17:17:00.661353   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:00.661718   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:00.661741   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:00.661663   30361 retry.go:31] will retry after 2.128283213s: waiting for machine to come up
	I0422 17:17:02.791467   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:02.791788   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:02.791814   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:02.791745   30361 retry.go:31] will retry after 1.856350174s: waiting for machine to come up
	I0422 17:17:04.649259   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:04.649742   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:04.649782   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:04.649687   30361 retry.go:31] will retry after 2.216077949s: waiting for machine to come up
	I0422 17:17:06.869019   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:06.869529   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:06.869553   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:06.869465   30361 retry.go:31] will retry after 3.742529286s: waiting for machine to come up
	I0422 17:17:10.615809   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:10.616365   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find current IP address of domain ha-025067 in network mk-ha-025067
	I0422 17:17:10.616394   30338 main.go:141] libmachine: (ha-025067) DBG | I0422 17:17:10.616315   30361 retry.go:31] will retry after 4.954168816s: waiting for machine to come up
	I0422 17:17:15.574406   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.574869   30338 main.go:141] libmachine: (ha-025067) Found IP for machine: 192.168.39.22
	I0422 17:17:15.574892   30338 main.go:141] libmachine: (ha-025067) Reserving static IP address...
	I0422 17:17:15.574904   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has current primary IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.575257   30338 main.go:141] libmachine: (ha-025067) DBG | unable to find host DHCP lease matching {name: "ha-025067", mac: "52:54:00:8b:2a:21", ip: "192.168.39.22"} in network mk-ha-025067
	I0422 17:17:15.646280   30338 main.go:141] libmachine: (ha-025067) DBG | Getting to WaitForSSH function...
	I0422 17:17:15.646318   30338 main.go:141] libmachine: (ha-025067) Reserved static IP address: 192.168.39.22
	I0422 17:17:15.646330   30338 main.go:141] libmachine: (ha-025067) Waiting for SSH to be available...
	I0422 17:17:15.648969   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.649518   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:15.649544   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.649677   30338 main.go:141] libmachine: (ha-025067) DBG | Using SSH client type: external
	I0422 17:17:15.650157   30338 main.go:141] libmachine: (ha-025067) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa (-rw-------)
	I0422 17:17:15.650204   30338 main.go:141] libmachine: (ha-025067) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.22 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 17:17:15.650230   30338 main.go:141] libmachine: (ha-025067) DBG | About to run SSH command:
	I0422 17:17:15.650245   30338 main.go:141] libmachine: (ha-025067) DBG | exit 0
	I0422 17:17:15.779021   30338 main.go:141] libmachine: (ha-025067) DBG | SSH cmd err, output: <nil>: 
	I0422 17:17:15.779271   30338 main.go:141] libmachine: (ha-025067) KVM machine creation complete!
	I0422 17:17:15.779583   30338 main.go:141] libmachine: (ha-025067) Calling .GetConfigRaw
	I0422 17:17:15.780108   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:15.780287   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:15.780417   30338 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 17:17:15.780429   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:15.781557   30338 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 17:17:15.781572   30338 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 17:17:15.781579   30338 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 17:17:15.781586   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:15.783633   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.784006   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:15.784032   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.784135   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:15.784322   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.784453   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.784555   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:15.784718   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:15.784950   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:15.784962   30338 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 17:17:15.894473   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:17:15.894500   30338 main.go:141] libmachine: Detecting the provisioner...
	I0422 17:17:15.894511   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:15.897294   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.897667   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:15.897692   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:15.897818   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:15.898015   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.898147   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:15.898290   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:15.898464   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:15.898654   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:15.898674   30338 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 17:17:16.008050   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 17:17:16.008142   30338 main.go:141] libmachine: found compatible host: buildroot
	I0422 17:17:16.008157   30338 main.go:141] libmachine: Provisioning with buildroot...
	I0422 17:17:16.008170   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:17:16.008415   30338 buildroot.go:166] provisioning hostname "ha-025067"
	I0422 17:17:16.008437   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:17:16.008633   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.011139   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.011500   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.011520   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.011691   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.011859   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.012004   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.012132   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.012300   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.012496   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.012509   30338 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067 && echo "ha-025067" | sudo tee /etc/hostname
	I0422 17:17:16.134495   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067
	
	I0422 17:17:16.134535   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.137003   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.137308   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.137334   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.137493   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.137714   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.137909   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.138028   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.138205   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.138354   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.138369   30338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:17:16.256974   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:17:16.257003   30338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:17:16.257071   30338 buildroot.go:174] setting up certificates
	I0422 17:17:16.257083   30338 provision.go:84] configureAuth start
	I0422 17:17:16.257097   30338 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:17:16.257367   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:16.259679   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.260084   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.260120   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.260243   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.262444   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.262813   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.262835   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.262965   30338 provision.go:143] copyHostCerts
	I0422 17:17:16.263004   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:17:16.263106   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:17:16.263167   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:17:16.263251   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:17:16.263340   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:17:16.263359   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:17:16.263366   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:17:16.263391   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:17:16.263441   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:17:16.263459   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:17:16.263466   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:17:16.263491   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:17:16.263579   30338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067 san=[127.0.0.1 192.168.39.22 ha-025067 localhost minikube]
	I0422 17:17:16.351025   30338 provision.go:177] copyRemoteCerts
	I0422 17:17:16.351085   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:17:16.351106   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.353536   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.353827   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.353862   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.354018   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.354199   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.354331   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.354470   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:16.442349   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:17:16.442413   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:17:16.467844   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:17:16.467923   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0422 17:17:16.493373   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:17:16.493431   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 17:17:16.518912   30338 provision.go:87] duration metric: took 261.814442ms to configureAuth
	I0422 17:17:16.518945   30338 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:17:16.519215   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:17:16.519352   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.522066   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.522405   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.522432   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.522596   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.522786   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.522973   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.523098   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.523251   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.523438   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.523469   30338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:17:16.797209   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:17:16.797236   30338 main.go:141] libmachine: Checking connection to Docker...
	I0422 17:17:16.797244   30338 main.go:141] libmachine: (ha-025067) Calling .GetURL
	I0422 17:17:16.798626   30338 main.go:141] libmachine: (ha-025067) DBG | Using libvirt version 6000000
	I0422 17:17:16.801200   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.801514   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.801546   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.801723   30338 main.go:141] libmachine: Docker is up and running!
	I0422 17:17:16.801735   30338 main.go:141] libmachine: Reticulating splines...
	I0422 17:17:16.801741   30338 client.go:171] duration metric: took 24.169074993s to LocalClient.Create
	I0422 17:17:16.801764   30338 start.go:167] duration metric: took 24.169140026s to libmachine.API.Create "ha-025067"
	I0422 17:17:16.801772   30338 start.go:293] postStartSetup for "ha-025067" (driver="kvm2")
	I0422 17:17:16.801785   30338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:17:16.801799   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:16.802012   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:17:16.802030   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.804046   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.804307   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.804334   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.804441   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.804627   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.804757   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.804888   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:16.890248   30338 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:17:16.894951   30338 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:17:16.894976   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:17:16.895050   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:17:16.895264   30338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:17:16.895285   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:17:16.895403   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:17:16.905715   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:17:16.931370   30338 start.go:296] duration metric: took 129.583412ms for postStartSetup
	I0422 17:17:16.931429   30338 main.go:141] libmachine: (ha-025067) Calling .GetConfigRaw
	I0422 17:17:16.931987   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:16.934618   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.934947   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.934983   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.935214   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:17:16.935403   30338 start.go:128] duration metric: took 24.321886362s to createHost
	I0422 17:17:16.935427   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:16.937763   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.938043   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:16.938070   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:16.938181   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:16.938369   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.938536   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:16.938712   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:16.938866   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:17:16.939028   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:17:16.939042   30338 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 17:17:17.048077   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806237.020189542
	
	I0422 17:17:17.048098   30338 fix.go:216] guest clock: 1713806237.020189542
	I0422 17:17:17.048105   30338 fix.go:229] Guest: 2024-04-22 17:17:17.020189542 +0000 UTC Remote: 2024-04-22 17:17:16.93541497 +0000 UTC m=+24.442682821 (delta=84.774572ms)
	I0422 17:17:17.048135   30338 fix.go:200] guest clock delta is within tolerance: 84.774572ms
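	For reference, the delta reported above is simply the guest timestamp minus the host-side reference captured on the previous line; redone by hand with the logged values (this assumes nothing beyond a shell with bc installed, and is purely illustrative):
	  echo "1713806237.020189542 - 1713806236.935414970" | bc
	  # .084774572 s, i.e. ~84.77 ms, matching the reported delta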
	I0422 17:17:17.048145   30338 start.go:83] releasing machines lock for "ha-025067", held for 24.434699931s
	I0422 17:17:17.048165   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.048466   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:17.050647   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.051016   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:17.051044   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.051177   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.051722   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.051884   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:17.051970   30338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:17:17.052008   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:17.052079   30338 ssh_runner.go:195] Run: cat /version.json
	I0422 17:17:17.052105   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:17.054561   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.054870   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:17.054894   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.054916   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.055071   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:17.055251   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:17.055338   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:17.055372   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:17.055429   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:17.055523   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:17.055586   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:17.055657   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:17.055807   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:17.055945   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:17.172058   30338 ssh_runner.go:195] Run: systemctl --version
	I0422 17:17:17.178255   30338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:17:17.351734   30338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:17:17.357761   30338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:17:17.357826   30338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:17:17.374861   30338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 17:17:17.374884   30338 start.go:494] detecting cgroup driver to use...
	I0422 17:17:17.374944   30338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:17:17.391764   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:17:17.406363   30338 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:17:17.406446   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:17:17.420829   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:17:17.434640   30338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:17:17.560577   30338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:17:17.720575   30338 docker.go:233] disabling docker service ...
	I0422 17:17:17.720640   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:17:17.736345   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:17:17.749254   30338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:17:17.888250   30338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:17:17.998872   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:17:18.013867   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:17:18.033660   30338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:17:18.033729   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.045858   30338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:17:18.045930   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.057762   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.070225   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.082031   30338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:17:18.093847   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.105155   30338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:17:18.123633   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
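	After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf end up looking roughly like the sketch below; only these individual keys are touched by the commands shown, and the surrounding sections and other settings are left as shipped:
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]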
	I0422 17:17:18.135530   30338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:17:18.146350   30338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 17:17:18.146416   30338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 17:17:18.160607   30338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:17:18.171248   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:17:18.285103   30338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:17:18.427364   30338 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:17:18.427430   30338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:17:18.432215   30338 start.go:562] Will wait 60s for crictl version
	I0422 17:17:18.432261   30338 ssh_runner.go:195] Run: which crictl
	I0422 17:17:18.436087   30338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:17:18.474342   30338 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:17:18.474427   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:17:18.502366   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:17:18.533577   30338 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:17:18.535243   30338 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:17:18.537807   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:18.538177   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:18.538206   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:18.538458   30338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:17:18.542748   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:17:18.556805   30338 kubeadm.go:877] updating cluster {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 17:17:18.556922   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:17:18.556963   30338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:17:18.590355   30338 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 17:17:18.590416   30338 ssh_runner.go:195] Run: which lz4
	I0422 17:17:18.594419   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0422 17:17:18.594510   30338 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 17:17:18.598660   30338 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 17:17:18.598686   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 17:17:20.067394   30338 crio.go:462] duration metric: took 1.472907327s to copy over tarball
	I0422 17:17:20.067470   30338 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 17:17:22.341420   30338 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273918539s)
	I0422 17:17:22.341462   30338 crio.go:469] duration metric: took 2.274021881s to extract the tarball
	I0422 17:17:22.341473   30338 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 17:17:22.380285   30338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:17:22.430394   30338 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:17:22.430417   30338 cache_images.go:84] Images are preloaded, skipping loading
	I0422 17:17:22.430423   30338 kubeadm.go:928] updating node { 192.168.39.22 8443 v1.30.0 crio true true} ...
	I0422 17:17:22.430517   30338 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:17:22.430577   30338 ssh_runner.go:195] Run: crio config
	I0422 17:17:22.482051   30338 cni.go:84] Creating CNI manager for ""
	I0422 17:17:22.482073   30338 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 17:17:22.482085   30338 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 17:17:22.482104   30338 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-025067 NodeName:ha-025067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 17:17:22.482226   30338 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-025067"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 17:17:22.482252   30338 kube-vip.go:111] generating kube-vip config ...
	I0422 17:17:22.482289   30338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:17:22.499572   30338 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:17:22.499685   30338 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
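	Once this static pod is running on the control plane, the advertised VIP (address 192.168.39.254 on interface eth0 in the config above) should appear as an extra address on the current leader node; a quick, purely illustrative check from inside the VM would be:
	  ip -4 addr show dev eth0 | grep 192.168.39.254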
	I0422 17:17:22.499788   30338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:17:22.511530   30338 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 17:17:22.511598   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 17:17:22.522317   30338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0422 17:17:22.539777   30338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:17:22.558702   30338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0422 17:17:22.578843   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0422 17:17:22.598371   30338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:17:22.602486   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:17:22.617761   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:17:22.732562   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:17:22.750808   30338 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.22
	I0422 17:17:22.750831   30338 certs.go:194] generating shared ca certs ...
	I0422 17:17:22.750850   30338 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:22.751000   30338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:17:22.751050   30338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:17:22.751062   30338 certs.go:256] generating profile certs ...
	I0422 17:17:22.751114   30338 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:17:22.751146   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt with IP's: []
	I0422 17:17:22.915108   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt ...
	I0422 17:17:22.915152   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt: {Name:mk430bbc2ed98d56b9d3bf935e45898d0ff4a313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:22.915336   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key ...
	I0422 17:17:22.915357   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key: {Name:mkfbfc636d8b8074e5a1767eaca4ba73158825b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:22.915457   30338 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a
	I0422 17:17:22.915476   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.254]
	I0422 17:17:23.036108   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a ...
	I0422 17:17:23.036141   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a: {Name:mk10f3464e3fe632e615efa17cc1af5344bd012e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.036318   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a ...
	I0422 17:17:23.036342   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a: {Name:mk537b75872841f8afa81021b50c851254a7f89e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.036439   30338 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.42343a7a -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:17:23.036527   30338 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.42343a7a -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
	I0422 17:17:23.036605   30338 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:17:23.036625   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt with IP's: []
	I0422 17:17:23.187088   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt ...
	I0422 17:17:23.187136   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt: {Name:mkda5bf715a5cd070a437870bf07f33adca40e4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.187314   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key ...
	I0422 17:17:23.187328   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key: {Name:mkc14c2c1ca9036d53c70ad1a0a708516fe753a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:23.187425   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:17:23.187446   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:17:23.187462   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:17:23.187477   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:17:23.187495   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:17:23.187514   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:17:23.187532   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:17:23.187558   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:17:23.187618   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:17:23.187663   30338 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:17:23.187677   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:17:23.187710   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:17:23.187740   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:17:23.187776   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:17:23.187831   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:17:23.187879   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.187900   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.187918   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.188465   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:17:23.216928   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:17:23.244740   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:17:23.270485   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:17:23.296668   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 17:17:23.323400   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 17:17:23.350450   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:17:23.378433   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:17:23.405755   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:17:23.432548   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:17:23.459972   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:17:23.485881   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 17:17:23.505928   30338 ssh_runner.go:195] Run: openssl version
	I0422 17:17:23.521102   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:17:23.542466   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.547714   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.547764   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:17:23.554814   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:17:23.570174   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:17:23.581300   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.586029   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.586073   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:17:23.592061   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:17:23.603046   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:17:23.618173   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.624758   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.624818   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:17:23.630925   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
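	The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow the usual OpenSSL convention: the value printed by the preceding "openssl x509 -hash -noout" call, plus a ".0" suffix. A hand-rolled equivalent of the last pair of commands, shown only as a sketch:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"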
	I0422 17:17:23.642412   30338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:17:23.646609   30338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 17:17:23.646664   30338 kubeadm.go:391] StartCluster: {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:17:23.646754   30338 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 17:17:23.646803   30338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 17:17:23.685745   30338 cri.go:89] found id: ""
	I0422 17:17:23.685824   30338 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 17:17:23.696735   30338 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 17:17:23.707358   30338 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 17:17:23.717929   30338 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 17:17:23.717954   30338 kubeadm.go:156] found existing configuration files:
	
	I0422 17:17:23.718003   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 17:17:23.727799   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 17:17:23.727861   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 17:17:23.738774   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 17:17:23.748975   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 17:17:23.749033   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 17:17:23.759471   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 17:17:23.769183   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 17:17:23.769249   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 17:17:23.779355   30338 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 17:17:23.789246   30338 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 17:17:23.789307   30338 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 17:17:23.800177   30338 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 17:17:23.900715   30338 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 17:17:23.900777   30338 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 17:17:24.027754   30338 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 17:17:24.027894   30338 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 17:17:24.028035   30338 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 17:17:24.245420   30338 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 17:17:24.378712   30338 out.go:204]   - Generating certificates and keys ...
	I0422 17:17:24.378833   30338 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 17:17:24.378947   30338 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 17:17:24.542801   30338 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 17:17:24.724926   30338 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 17:17:24.824807   30338 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 17:17:25.104482   30338 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 17:17:25.235796   30338 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 17:17:25.235913   30338 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-025067 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0422 17:17:25.360581   30338 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 17:17:25.360791   30338 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-025067 localhost] and IPs [192.168.39.22 127.0.0.1 ::1]
	I0422 17:17:25.486631   30338 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 17:17:25.585077   30338 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 17:17:25.929873   30338 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 17:17:25.930367   30338 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 17:17:26.328948   30338 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 17:17:26.449725   30338 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 17:17:26.539056   30338 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 17:17:26.722381   30338 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 17:17:26.836166   30338 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 17:17:26.836622   30338 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 17:17:26.841711   30338 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 17:17:26.843751   30338 out.go:204]   - Booting up control plane ...
	I0422 17:17:26.843849   30338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 17:17:26.843953   30338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 17:17:26.844039   30338 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 17:17:26.859962   30338 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 17:17:26.860846   30338 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 17:17:26.860919   30338 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 17:17:26.990901   30338 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 17:17:26.991009   30338 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 17:17:27.491088   30338 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.954848ms
	I0422 17:17:27.491224   30338 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 17:17:33.469170   30338 kubeadm.go:309] [api-check] The API server is healthy after 5.982033292s
	I0422 17:17:33.481806   30338 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 17:17:33.506950   30338 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 17:17:34.038663   30338 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 17:17:34.038823   30338 kubeadm.go:309] [mark-control-plane] Marking the node ha-025067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 17:17:34.060763   30338 kubeadm.go:309] [bootstrap-token] Using token: 7rz1nt.dzwgo1uwph8u4dan
	I0422 17:17:34.062632   30338 out.go:204]   - Configuring RBAC rules ...
	I0422 17:17:34.062795   30338 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 17:17:34.074146   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 17:17:34.081674   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 17:17:34.085022   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 17:17:34.088276   30338 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 17:17:34.091886   30338 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 17:17:34.106861   30338 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 17:17:34.359105   30338 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 17:17:34.875599   30338 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 17:17:34.876676   30338 kubeadm.go:309] 
	I0422 17:17:34.876754   30338 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 17:17:34.876780   30338 kubeadm.go:309] 
	I0422 17:17:34.876868   30338 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 17:17:34.876880   30338 kubeadm.go:309] 
	I0422 17:17:34.876956   30338 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 17:17:34.877039   30338 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 17:17:34.877115   30338 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 17:17:34.877134   30338 kubeadm.go:309] 
	I0422 17:17:34.877205   30338 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 17:17:34.877216   30338 kubeadm.go:309] 
	I0422 17:17:34.877277   30338 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 17:17:34.877284   30338 kubeadm.go:309] 
	I0422 17:17:34.877323   30338 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 17:17:34.877382   30338 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 17:17:34.877457   30338 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 17:17:34.877465   30338 kubeadm.go:309] 
	I0422 17:17:34.877542   30338 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 17:17:34.877644   30338 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 17:17:34.877654   30338 kubeadm.go:309] 
	I0422 17:17:34.877721   30338 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7rz1nt.dzwgo1uwph8u4dan \
	I0422 17:17:34.877858   30338 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 17:17:34.877880   30338 kubeadm.go:309] 	--control-plane 
	I0422 17:17:34.877884   30338 kubeadm.go:309] 
	I0422 17:17:34.878024   30338 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 17:17:34.878044   30338 kubeadm.go:309] 
	I0422 17:17:34.878149   30338 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7rz1nt.dzwgo1uwph8u4dan \
	I0422 17:17:34.878261   30338 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 17:17:34.878930   30338 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 17:17:34.878959   30338 cni.go:84] Creating CNI manager for ""
	I0422 17:17:34.878966   30338 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0422 17:17:34.880738   30338 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0422 17:17:34.881879   30338 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0422 17:17:34.887775   30338 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0422 17:17:34.887794   30338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0422 17:17:34.913083   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
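For context, the CNI step above is an ordinary kubectl apply of the generated kindnet manifest against the freshly initialized cluster. A minimal sketch of the equivalent manual invocation (binary, kubeconfig, and manifest paths taken from the log; the follow-up check is illustrative and assumes the manifest creates a DaemonSet named kindnet in kube-system):

	# apply the kindnet CNI manifest with the cluster's pinned kubectl binary
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply \
	  --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml

	# illustrative check that the CNI DaemonSet exists (DaemonSet name assumed)
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet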
	I0422 17:17:35.280059   30338 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 17:17:35.280163   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:35.280176   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-025067 minikube.k8s.io/updated_at=2024_04_22T17_17_35_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=ha-025067 minikube.k8s.io/primary=true
	I0422 17:17:35.404189   30338 ops.go:34] apiserver oom_adj: -16
	I0422 17:17:35.421996   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:35.922431   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:36.422389   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:36.922341   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:37.422868   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:37.922959   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:38.422871   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:38.922933   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:39.422984   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:39.922589   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:40.423080   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:40.922709   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:41.423065   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:41.922294   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:42.422583   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:42.922922   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:43.422932   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:43.922094   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:44.422125   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:44.922968   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:45.422671   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:45.922862   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:46.422038   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:46.922225   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:47.422202   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:47.922994   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 17:17:48.024774   30338 kubeadm.go:1107] duration metric: took 12.744673877s to wait for elevateKubeSystemPrivileges
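The repeated "get sa default" calls above are minikube waiting for the default ServiceAccount to appear before its elevated privileges (the minikube-rbac clusterrolebinding created a few lines earlier) take effect. A hedged shell equivalent of that wait-and-bind sequence, reusing the exact commands from the log with the polling loop added for illustration:

	# poll until the default ServiceAccount exists, then bind cluster-admin to kube-system:default
	KUBECTL=/var/lib/minikube/binaries/v1.30.0/kubectl
	until sudo $KUBECTL get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	sudo $KUBECTL create clusterrolebinding minikube-rbac --clusterrole=cluster-admin \
	  --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig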
	W0422 17:17:48.024809   30338 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 17:17:48.024820   30338 kubeadm.go:393] duration metric: took 24.378158938s to StartCluster
	I0422 17:17:48.024837   30338 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:48.024911   30338 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:17:48.025566   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:17:48.025776   30338 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:17:48.025790   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 17:17:48.025804   30338 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 17:17:48.025852   30338 addons.go:69] Setting storage-provisioner=true in profile "ha-025067"
	I0422 17:17:48.025883   30338 addons.go:234] Setting addon storage-provisioner=true in "ha-025067"
	I0422 17:17:48.025901   30338 addons.go:69] Setting default-storageclass=true in profile "ha-025067"
	I0422 17:17:48.025918   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:17:48.025927   30338 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-025067"
	I0422 17:17:48.025798   30338 start.go:240] waiting for startup goroutines ...
	I0422 17:17:48.026026   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:17:48.026312   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.026312   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.026368   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.026340   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.041188   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39753
	I0422 17:17:48.041188   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0422 17:17:48.041652   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.041804   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.042213   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.042240   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.042323   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.042345   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.042568   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.042644   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.042826   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:48.043097   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.043118   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.045097   30338 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:17:48.045434   30338 kapi.go:59] client config for ha-025067: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0422 17:17:48.045942   30338 cert_rotation.go:137] Starting client certificate rotation controller
	I0422 17:17:48.046189   30338 addons.go:234] Setting addon default-storageclass=true in "ha-025067"
	I0422 17:17:48.046226   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:17:48.046491   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.046525   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.058471   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
	I0422 17:17:48.058926   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.059453   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.059473   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.059823   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.060049   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:48.060592   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I0422 17:17:48.060934   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.061449   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.061469   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.061829   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.061886   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:48.062291   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:48.062312   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:48.064567   30338 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 17:17:48.066096   30338 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 17:17:48.066111   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 17:17:48.066131   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:48.069330   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.069810   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:48.069837   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.069995   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:48.070200   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:48.070365   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:48.070527   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:48.077528   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37009
	I0422 17:17:48.077920   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:48.078416   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:48.078445   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:48.078788   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:48.078989   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:17:48.080708   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:17:48.080950   30338 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 17:17:48.080969   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 17:17:48.080992   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:17:48.083420   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.083908   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:17:48.083938   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:17:48.084085   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:17:48.084254   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:17:48.084394   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:17:48.084538   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:17:48.198341   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 17:17:48.236215   30338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 17:17:48.260479   30338 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 17:17:48.808081   30338 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
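The long sed pipeline a few lines above injects a hosts block (and a log directive) into the CoreDNS Corefile so that host.minikube.internal resolves to the host's IP. Reconstructed from that sed expression, the edited Corefile fragment looks roughly like this (not captured verbatim in the log):

	log
	errors
	...
	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf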
	I0422 17:17:48.808173   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:48.808191   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:48.808571   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:48.808590   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:48.808616   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:48.808628   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:48.808864   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:48.808880   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:48.808911   30338 main.go:141] libmachine: (ha-025067) DBG | Closing plugin on server side
	I0422 17:17:48.808996   30338 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0422 17:17:48.809009   30338 round_trippers.go:469] Request Headers:
	I0422 17:17:48.809019   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:17:48.809023   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:17:48.819866   30338 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0422 17:17:48.820646   30338 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0422 17:17:48.820666   30338 round_trippers.go:469] Request Headers:
	I0422 17:17:48.820677   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:17:48.820682   30338 round_trippers.go:473]     Content-Type: application/json
	I0422 17:17:48.820687   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:17:48.823392   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:17:48.823590   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:48.823607   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:48.823950   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:48.823965   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:49.039496   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:49.039525   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:49.039800   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:49.039839   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:49.039878   30338 main.go:141] libmachine: Making call to close driver server
	I0422 17:17:49.039901   30338 main.go:141] libmachine: (ha-025067) Calling .Close
	I0422 17:17:49.040187   30338 main.go:141] libmachine: Successfully made call to close driver server
	I0422 17:17:49.040205   30338 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 17:17:49.040230   30338 main.go:141] libmachine: (ha-025067) DBG | Closing plugin on server side
	I0422 17:17:49.042419   30338 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0422 17:17:49.043697   30338 addons.go:505] duration metric: took 1.017890194s for enable addons: enabled=[default-storageclass storage-provisioner]
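With both addons reported enabled, their status can also be checked from the host with the standard addons command; an illustrative invocation using the same binary and profile name that appear in this report (output not captured here):

	# list addon status for the ha-025067 profile
	out/minikube-linux-amd64 -p ha-025067 addons list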
	I0422 17:17:49.043728   30338 start.go:245] waiting for cluster config update ...
	I0422 17:17:49.043739   30338 start.go:254] writing updated cluster config ...
	I0422 17:17:49.045267   30338 out.go:177] 
	I0422 17:17:49.046638   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:17:49.046697   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:17:49.048449   30338 out.go:177] * Starting "ha-025067-m02" control-plane node in "ha-025067" cluster
	I0422 17:17:49.049972   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:17:49.049997   30338 cache.go:56] Caching tarball of preloaded images
	I0422 17:17:49.050097   30338 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:17:49.050112   30338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:17:49.050178   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:17:49.050551   30338 start.go:360] acquireMachinesLock for ha-025067-m02: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:17:49.050595   30338 start.go:364] duration metric: took 24.853µs to acquireMachinesLock for "ha-025067-m02"
	I0422 17:17:49.050608   30338 start.go:93] Provisioning new machine with config: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:17:49.050679   30338 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0422 17:17:49.052323   30338 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 17:17:49.052399   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:17:49.052422   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:17:49.067500   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36009
	I0422 17:17:49.068031   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:17:49.068556   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:17:49.068577   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:17:49.068921   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:17:49.069139   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:17:49.069343   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:17:49.069529   30338 start.go:159] libmachine.API.Create for "ha-025067" (driver="kvm2")
	I0422 17:17:49.069555   30338 client.go:168] LocalClient.Create starting
	I0422 17:17:49.069594   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 17:17:49.069637   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:17:49.069675   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:17:49.069745   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 17:17:49.069775   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:17:49.069792   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:17:49.069819   30338 main.go:141] libmachine: Running pre-create checks...
	I0422 17:17:49.069832   30338 main.go:141] libmachine: (ha-025067-m02) Calling .PreCreateCheck
	I0422 17:17:49.069994   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetConfigRaw
	I0422 17:17:49.070438   30338 main.go:141] libmachine: Creating machine...
	I0422 17:17:49.070458   30338 main.go:141] libmachine: (ha-025067-m02) Calling .Create
	I0422 17:17:49.070582   30338 main.go:141] libmachine: (ha-025067-m02) Creating KVM machine...
	I0422 17:17:49.071942   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found existing default KVM network
	I0422 17:17:49.072032   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found existing private KVM network mk-ha-025067
	I0422 17:17:49.072217   30338 main.go:141] libmachine: (ha-025067-m02) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02 ...
	I0422 17:17:49.072242   30338 main.go:141] libmachine: (ha-025067-m02) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 17:17:49.072289   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.072190   31195 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:17:49.072371   30338 main.go:141] libmachine: (ha-025067-m02) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 17:17:49.285347   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.285226   31195 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa...
	I0422 17:17:49.423872   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.423692   31195 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/ha-025067-m02.rawdisk...
	I0422 17:17:49.423922   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Writing magic tar header
	I0422 17:17:49.423940   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Writing SSH key tar header
	I0422 17:17:49.423952   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:49.423844   31195 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02 ...
	I0422 17:17:49.423968   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02
	I0422 17:17:49.424016   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02 (perms=drwx------)
	I0422 17:17:49.424049   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 17:17:49.424075   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 17:17:49.424085   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 17:17:49.424095   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 17:17:49.424103   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 17:17:49.424112   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:17:49.424121   30338 main.go:141] libmachine: (ha-025067-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 17:17:49.424131   30338 main.go:141] libmachine: (ha-025067-m02) Creating domain...
	I0422 17:17:49.424142   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 17:17:49.424151   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 17:17:49.424159   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home/jenkins
	I0422 17:17:49.424167   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Checking permissions on dir: /home
	I0422 17:17:49.424173   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Skipping /home - not owner
	I0422 17:17:49.425159   30338 main.go:141] libmachine: (ha-025067-m02) define libvirt domain using xml: 
	I0422 17:17:49.425179   30338 main.go:141] libmachine: (ha-025067-m02) <domain type='kvm'>
	I0422 17:17:49.425188   30338 main.go:141] libmachine: (ha-025067-m02)   <name>ha-025067-m02</name>
	I0422 17:17:49.425195   30338 main.go:141] libmachine: (ha-025067-m02)   <memory unit='MiB'>2200</memory>
	I0422 17:17:49.425203   30338 main.go:141] libmachine: (ha-025067-m02)   <vcpu>2</vcpu>
	I0422 17:17:49.425209   30338 main.go:141] libmachine: (ha-025067-m02)   <features>
	I0422 17:17:49.425216   30338 main.go:141] libmachine: (ha-025067-m02)     <acpi/>
	I0422 17:17:49.425233   30338 main.go:141] libmachine: (ha-025067-m02)     <apic/>
	I0422 17:17:49.425246   30338 main.go:141] libmachine: (ha-025067-m02)     <pae/>
	I0422 17:17:49.425258   30338 main.go:141] libmachine: (ha-025067-m02)     
	I0422 17:17:49.425293   30338 main.go:141] libmachine: (ha-025067-m02)   </features>
	I0422 17:17:49.425328   30338 main.go:141] libmachine: (ha-025067-m02)   <cpu mode='host-passthrough'>
	I0422 17:17:49.425364   30338 main.go:141] libmachine: (ha-025067-m02)   
	I0422 17:17:49.425394   30338 main.go:141] libmachine: (ha-025067-m02)   </cpu>
	I0422 17:17:49.425402   30338 main.go:141] libmachine: (ha-025067-m02)   <os>
	I0422 17:17:49.425411   30338 main.go:141] libmachine: (ha-025067-m02)     <type>hvm</type>
	I0422 17:17:49.425420   30338 main.go:141] libmachine: (ha-025067-m02)     <boot dev='cdrom'/>
	I0422 17:17:49.425429   30338 main.go:141] libmachine: (ha-025067-m02)     <boot dev='hd'/>
	I0422 17:17:49.425436   30338 main.go:141] libmachine: (ha-025067-m02)     <bootmenu enable='no'/>
	I0422 17:17:49.425450   30338 main.go:141] libmachine: (ha-025067-m02)   </os>
	I0422 17:17:49.425469   30338 main.go:141] libmachine: (ha-025067-m02)   <devices>
	I0422 17:17:49.425488   30338 main.go:141] libmachine: (ha-025067-m02)     <disk type='file' device='cdrom'>
	I0422 17:17:49.425505   30338 main.go:141] libmachine: (ha-025067-m02)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/boot2docker.iso'/>
	I0422 17:17:49.425516   30338 main.go:141] libmachine: (ha-025067-m02)       <target dev='hdc' bus='scsi'/>
	I0422 17:17:49.425528   30338 main.go:141] libmachine: (ha-025067-m02)       <readonly/>
	I0422 17:17:49.425537   30338 main.go:141] libmachine: (ha-025067-m02)     </disk>
	I0422 17:17:49.425549   30338 main.go:141] libmachine: (ha-025067-m02)     <disk type='file' device='disk'>
	I0422 17:17:49.425562   30338 main.go:141] libmachine: (ha-025067-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 17:17:49.425583   30338 main.go:141] libmachine: (ha-025067-m02)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/ha-025067-m02.rawdisk'/>
	I0422 17:17:49.425599   30338 main.go:141] libmachine: (ha-025067-m02)       <target dev='hda' bus='virtio'/>
	I0422 17:17:49.425610   30338 main.go:141] libmachine: (ha-025067-m02)     </disk>
	I0422 17:17:49.425622   30338 main.go:141] libmachine: (ha-025067-m02)     <interface type='network'>
	I0422 17:17:49.425636   30338 main.go:141] libmachine: (ha-025067-m02)       <source network='mk-ha-025067'/>
	I0422 17:17:49.425652   30338 main.go:141] libmachine: (ha-025067-m02)       <model type='virtio'/>
	I0422 17:17:49.425664   30338 main.go:141] libmachine: (ha-025067-m02)     </interface>
	I0422 17:17:49.425676   30338 main.go:141] libmachine: (ha-025067-m02)     <interface type='network'>
	I0422 17:17:49.425689   30338 main.go:141] libmachine: (ha-025067-m02)       <source network='default'/>
	I0422 17:17:49.425697   30338 main.go:141] libmachine: (ha-025067-m02)       <model type='virtio'/>
	I0422 17:17:49.425710   30338 main.go:141] libmachine: (ha-025067-m02)     </interface>
	I0422 17:17:49.425723   30338 main.go:141] libmachine: (ha-025067-m02)     <serial type='pty'>
	I0422 17:17:49.425742   30338 main.go:141] libmachine: (ha-025067-m02)       <target port='0'/>
	I0422 17:17:49.425752   30338 main.go:141] libmachine: (ha-025067-m02)     </serial>
	I0422 17:17:49.425831   30338 main.go:141] libmachine: (ha-025067-m02)     <console type='pty'>
	I0422 17:17:49.425853   30338 main.go:141] libmachine: (ha-025067-m02)       <target type='serial' port='0'/>
	I0422 17:17:49.425862   30338 main.go:141] libmachine: (ha-025067-m02)     </console>
	I0422 17:17:49.425873   30338 main.go:141] libmachine: (ha-025067-m02)     <rng model='virtio'>
	I0422 17:17:49.425897   30338 main.go:141] libmachine: (ha-025067-m02)       <backend model='random'>/dev/random</backend>
	I0422 17:17:49.425921   30338 main.go:141] libmachine: (ha-025067-m02)     </rng>
	I0422 17:17:49.425954   30338 main.go:141] libmachine: (ha-025067-m02)     
	I0422 17:17:49.425965   30338 main.go:141] libmachine: (ha-025067-m02)     
	I0422 17:17:49.425975   30338 main.go:141] libmachine: (ha-025067-m02)   </devices>
	I0422 17:17:49.425984   30338 main.go:141] libmachine: (ha-025067-m02) </domain>
	I0422 17:17:49.426001   30338 main.go:141] libmachine: (ha-025067-m02) 
	I0422 17:17:49.432608   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:b0:45:d4 in network default
	I0422 17:17:49.433146   30338 main.go:141] libmachine: (ha-025067-m02) Ensuring networks are active...
	I0422 17:17:49.433206   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:49.433821   30338 main.go:141] libmachine: (ha-025067-m02) Ensuring network default is active
	I0422 17:17:49.434145   30338 main.go:141] libmachine: (ha-025067-m02) Ensuring network mk-ha-025067 is active
	I0422 17:17:49.434613   30338 main.go:141] libmachine: (ha-025067-m02) Getting domain xml...
	I0422 17:17:49.435370   30338 main.go:141] libmachine: (ha-025067-m02) Creating domain...
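The domain XML printed above is handed to libvirt and the machine is started; the "Waiting to get IP" loop that follows is effectively a poll of the private network's DHCP leases. A rough virsh equivalent of what the kvm2 driver does through the libvirt API, assuming the generated XML has been written to a file (the file name is assumed; the connection URI matches KVMQemuURI in the profile config):

	# define and start the secondary control-plane VM from the generated XML
	virsh --connect qemu:///system define ha-025067-m02.xml
	virsh --connect qemu:///system start ha-025067-m02

	# watch for the DHCP lease the driver is waiting on below
	virsh --connect qemu:///system net-dhcp-leases mk-ha-025067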
	I0422 17:17:50.666188   30338 main.go:141] libmachine: (ha-025067-m02) Waiting to get IP...
	I0422 17:17:50.667102   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:50.667502   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:50.667561   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:50.667491   31195 retry.go:31] will retry after 301.277138ms: waiting for machine to come up
	I0422 17:17:50.970032   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:50.970425   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:50.970476   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:50.970385   31195 retry.go:31] will retry after 336.847099ms: waiting for machine to come up
	I0422 17:17:51.309141   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:51.309579   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:51.309603   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:51.309517   31195 retry.go:31] will retry after 293.927768ms: waiting for machine to come up
	I0422 17:17:51.605249   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:51.605761   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:51.605784   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:51.605714   31195 retry.go:31] will retry after 379.885385ms: waiting for machine to come up
	I0422 17:17:51.987196   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:51.987549   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:51.987570   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:51.987520   31195 retry.go:31] will retry after 520.525548ms: waiting for machine to come up
	I0422 17:17:52.509209   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:52.509674   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:52.509697   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:52.509635   31195 retry.go:31] will retry after 711.500166ms: waiting for machine to come up
	I0422 17:17:53.222388   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:53.222875   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:53.222911   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:53.222805   31195 retry.go:31] will retry after 831.419751ms: waiting for machine to come up
	I0422 17:17:54.057220   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:54.057699   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:54.057746   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:54.057651   31195 retry.go:31] will retry after 1.278962374s: waiting for machine to come up
	I0422 17:17:55.338427   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:55.339058   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:55.339086   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:55.339005   31195 retry.go:31] will retry after 1.432428767s: waiting for machine to come up
	I0422 17:17:56.773315   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:56.773745   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:56.773771   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:56.773708   31195 retry.go:31] will retry after 1.431656718s: waiting for machine to come up
	I0422 17:17:58.206743   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:17:58.207257   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:17:58.207287   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:17:58.207208   31195 retry.go:31] will retry after 1.95615804s: waiting for machine to come up
	I0422 17:18:00.165373   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:00.165998   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:18:00.166025   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:18:00.165961   31195 retry.go:31] will retry after 2.219203379s: waiting for machine to come up
	I0422 17:18:02.388264   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:02.388717   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:18:02.388746   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:18:02.388639   31195 retry.go:31] will retry after 3.64058761s: waiting for machine to come up
	I0422 17:18:06.031722   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:06.032202   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find current IP address of domain ha-025067-m02 in network mk-ha-025067
	I0422 17:18:06.032232   30338 main.go:141] libmachine: (ha-025067-m02) DBG | I0422 17:18:06.032159   31195 retry.go:31] will retry after 5.444187126s: waiting for machine to come up
	I0422 17:18:11.479729   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.480279   30338 main.go:141] libmachine: (ha-025067-m02) Found IP for machine: 192.168.39.56
	I0422 17:18:11.480303   30338 main.go:141] libmachine: (ha-025067-m02) Reserving static IP address...
	I0422 17:18:11.480318   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has current primary IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.480642   30338 main.go:141] libmachine: (ha-025067-m02) DBG | unable to find host DHCP lease matching {name: "ha-025067-m02", mac: "52:54:00:f3:68:d1", ip: "192.168.39.56"} in network mk-ha-025067
	I0422 17:18:11.550688   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Getting to WaitForSSH function...
	I0422 17:18:11.550718   30338 main.go:141] libmachine: (ha-025067-m02) Reserved static IP address: 192.168.39.56
	I0422 17:18:11.550747   30338 main.go:141] libmachine: (ha-025067-m02) Waiting for SSH to be available...
	I0422 17:18:11.553229   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.553630   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.553658   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.554001   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Using SSH client type: external
	I0422 17:18:11.554039   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa (-rw-------)
	I0422 17:18:11.554067   30338 main.go:141] libmachine: (ha-025067-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.56 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 17:18:11.554081   30338 main.go:141] libmachine: (ha-025067-m02) DBG | About to run SSH command:
	I0422 17:18:11.554094   30338 main.go:141] libmachine: (ha-025067-m02) DBG | exit 0
	I0422 17:18:11.679377   30338 main.go:141] libmachine: (ha-025067-m02) DBG | SSH cmd err, output: <nil>: 
	I0422 17:18:11.679618   30338 main.go:141] libmachine: (ha-025067-m02) KVM machine creation complete!
	I0422 17:18:11.679935   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetConfigRaw
	I0422 17:18:11.680482   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:11.680683   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:11.680837   30338 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 17:18:11.680851   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:18:11.682080   30338 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 17:18:11.682098   30338 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 17:18:11.682105   30338 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 17:18:11.682114   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:11.684353   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.684726   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.684755   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.684907   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:11.685071   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.685254   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.685415   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:11.685582   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:11.685773   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:11.685785   30338 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 17:18:11.794800   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:18:11.794822   30338 main.go:141] libmachine: Detecting the provisioner...
	I0422 17:18:11.794828   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:11.797776   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.798206   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.798245   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.798382   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:11.798584   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.798743   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.798903   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:11.799169   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:11.799391   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:11.799410   30338 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 17:18:11.907877   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 17:18:11.907954   30338 main.go:141] libmachine: found compatible host: buildroot
	I0422 17:18:11.907967   30338 main.go:141] libmachine: Provisioning with buildroot...
	I0422 17:18:11.907978   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:18:11.908248   30338 buildroot.go:166] provisioning hostname "ha-025067-m02"
	I0422 17:18:11.908271   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:18:11.908448   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:11.911106   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.911484   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:11.911517   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:11.911646   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:11.911837   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.911987   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:11.912142   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:11.912294   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:11.912525   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:11.912549   30338 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067-m02 && echo "ha-025067-m02" | sudo tee /etc/hostname
	I0422 17:18:12.035161   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067-m02
	
	I0422 17:18:12.035190   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.037839   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.038117   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.038157   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.038312   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.038574   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.038754   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.038930   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.039092   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:12.039396   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:12.039424   30338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:18:12.157161   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:18:12.157193   30338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:18:12.157210   30338 buildroot.go:174] setting up certificates
	I0422 17:18:12.157221   30338 provision.go:84] configureAuth start
	I0422 17:18:12.157233   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetMachineName
	I0422 17:18:12.157506   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:12.160150   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.160512   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.160540   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.160713   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.162801   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.163089   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.163108   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.163267   30338 provision.go:143] copyHostCerts
	I0422 17:18:12.163294   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:18:12.163330   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:18:12.163343   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:18:12.163429   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:18:12.163530   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:18:12.163554   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:18:12.163561   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:18:12.163588   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:18:12.163637   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:18:12.163653   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:18:12.163656   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:18:12.163676   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:18:12.163718   30338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067-m02 san=[127.0.0.1 192.168.39.56 ha-025067-m02 localhost minikube]
	I0422 17:18:12.318423   30338 provision.go:177] copyRemoteCerts
	I0422 17:18:12.318475   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:18:12.318503   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.321344   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.321682   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.321723   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.321859   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.322043   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.322178   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.322358   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:12.406005   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:18:12.406072   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:18:12.431931   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:18:12.432041   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 17:18:12.456622   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:18:12.456683   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 17:18:12.482344   30338 provision.go:87] duration metric: took 325.111637ms to configureAuth
	I0422 17:18:12.482368   30338 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:18:12.482570   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:18:12.482649   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.485568   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.486114   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.486143   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.486309   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.486485   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.486652   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.486795   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.486947   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:12.487112   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:12.487151   30338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:18:12.759364   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:18:12.759394   30338 main.go:141] libmachine: Checking connection to Docker...
	I0422 17:18:12.759404   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetURL
	I0422 17:18:12.760603   30338 main.go:141] libmachine: (ha-025067-m02) DBG | Using libvirt version 6000000
	I0422 17:18:12.762733   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.763029   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.763070   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.763218   30338 main.go:141] libmachine: Docker is up and running!
	I0422 17:18:12.763240   30338 main.go:141] libmachine: Reticulating splines...
	I0422 17:18:12.763247   30338 client.go:171] duration metric: took 23.693681012s to LocalClient.Create
	I0422 17:18:12.763278   30338 start.go:167] duration metric: took 23.693749721s to libmachine.API.Create "ha-025067"
	I0422 17:18:12.763288   30338 start.go:293] postStartSetup for "ha-025067-m02" (driver="kvm2")
	I0422 17:18:12.763298   30338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:18:12.763314   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:12.763556   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:18:12.763577   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.765458   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.765721   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.765748   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.765899   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.766068   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.766210   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.766321   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:12.851971   30338 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:18:12.856551   30338 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:18:12.856574   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:18:12.856626   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:18:12.856689   30338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:18:12.856700   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:18:12.856777   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:18:12.866934   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:18:12.892487   30338 start.go:296] duration metric: took 129.185273ms for postStartSetup
	I0422 17:18:12.892548   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetConfigRaw
	I0422 17:18:12.893179   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:12.895712   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.896057   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.896088   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.896343   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:18:12.896525   30338 start.go:128] duration metric: took 23.845835741s to createHost
	I0422 17:18:12.896550   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:12.898898   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.899256   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:12.899276   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:12.899464   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:12.899631   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.899749   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:12.899839   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:12.900026   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:18:12.900214   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.56 22 <nil> <nil>}
	I0422 17:18:12.900229   30338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:18:13.008347   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806292.979382927
	
	I0422 17:18:13.008366   30338 fix.go:216] guest clock: 1713806292.979382927
	I0422 17:18:13.008373   30338 fix.go:229] Guest: 2024-04-22 17:18:12.979382927 +0000 UTC Remote: 2024-04-22 17:18:12.896537372 +0000 UTC m=+80.403805215 (delta=82.845555ms)
	I0422 17:18:13.008387   30338 fix.go:200] guest clock delta is within tolerance: 82.845555ms
	I0422 17:18:13.008391   30338 start.go:83] releasing machines lock for "ha-025067-m02", held for 23.957790272s
	I0422 17:18:13.008406   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.008671   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:13.011031   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.011471   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:13.011501   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.013712   30338 out.go:177] * Found network options:
	I0422 17:18:13.015255   30338 out.go:177]   - NO_PROXY=192.168.39.22
	W0422 17:18:13.016682   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 17:18:13.016711   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.017188   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.017361   30338 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:18:13.017457   30338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:18:13.017491   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	W0422 17:18:13.017567   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 17:18:13.017625   30338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:18:13.017640   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:18:13.020234   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020346   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020583   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:13.020611   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020741   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:13.020828   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:13.020860   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:13.020911   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:13.020990   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:18:13.021066   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:13.021136   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:18:13.021201   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:13.021247   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:18:13.021372   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:18:13.260969   30338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:18:13.267814   30338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:18:13.267886   30338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:18:13.284474   30338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 17:18:13.284503   30338 start.go:494] detecting cgroup driver to use...
	I0422 17:18:13.284577   30338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:18:13.301433   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:18:13.316405   30338 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:18:13.316458   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:18:13.331387   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:18:13.346005   30338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:18:13.471058   30338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:18:13.632838   30338 docker.go:233] disabling docker service ...
	I0422 17:18:13.632909   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:18:13.647289   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:18:13.660863   30338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:18:13.793794   30338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:18:13.944560   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:18:13.959303   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:18:13.979204   30338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:18:13.979272   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:13.990469   30338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:18:13.990522   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.001643   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.012754   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.023974   30338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:18:14.035655   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.048156   30338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.067227   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:18:14.079560   30338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:18:14.090879   30338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 17:18:14.090941   30338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 17:18:14.105159   30338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:18:14.116709   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:18:14.243784   30338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:18:14.387806   30338 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:18:14.387882   30338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:18:14.394211   30338 start.go:562] Will wait 60s for crictl version
	I0422 17:18:14.394286   30338 ssh_runner.go:195] Run: which crictl
	I0422 17:18:14.398222   30338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:18:14.435419   30338 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:18:14.435502   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:18:14.465373   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:18:14.498246   30338 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:18:14.499750   30338 out.go:177]   - env NO_PROXY=192.168.39.22
	I0422 17:18:14.501194   30338 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:18:14.503676   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:14.504065   30338 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:18:04 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:18:14.504096   30338 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:18:14.504367   30338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:18:14.509086   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:18:14.525938   30338 mustload.go:65] Loading cluster: ha-025067
	I0422 17:18:14.526231   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:18:14.526625   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:18:14.526683   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:18:14.541354   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38241
	I0422 17:18:14.541793   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:18:14.542270   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:18:14.542292   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:18:14.542641   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:18:14.542784   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:18:14.544411   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:18:14.544687   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:18:14.544720   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:18:14.558909   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40089
	I0422 17:18:14.559288   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:18:14.559734   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:18:14.559754   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:18:14.560057   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:18:14.560230   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:18:14.560411   30338 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.56
	I0422 17:18:14.560423   30338 certs.go:194] generating shared ca certs ...
	I0422 17:18:14.560441   30338 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:18:14.560558   30338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:18:14.560593   30338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:18:14.560602   30338 certs.go:256] generating profile certs ...
	I0422 17:18:14.560661   30338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:18:14.560684   30338 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db
	I0422 17:18:14.560698   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.56 192.168.39.254]
	I0422 17:18:14.748385   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db ...
	I0422 17:18:14.748418   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db: {Name:mkb92a6fdff09c9dea3d22aedf18d5db4bbbc5e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:18:14.748613   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db ...
	I0422 17:18:14.748631   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db: {Name:mkb907f809d28a0e996ba56e8d5ef1ee7be2bc57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:18:14.748731   30338 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.6e4734db -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:18:14.748899   30338 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.6e4734db -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
	I0422 17:18:14.749065   30338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:18:14.749085   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:18:14.749103   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:18:14.749120   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:18:14.749139   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:18:14.749172   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:18:14.749205   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:18:14.749225   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:18:14.749243   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:18:14.749302   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:18:14.749342   30338 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:18:14.749357   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:18:14.749393   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:18:14.749424   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:18:14.749454   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:18:14.749514   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:18:14.749556   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:18:14.749576   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:14.749595   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:18:14.749634   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:18:14.752745   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:14.753127   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:18:14.753153   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:14.753301   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:18:14.753562   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:18:14.753707   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:18:14.753837   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:18:14.831489   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0422 17:18:14.838744   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 17:18:14.853986   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0422 17:18:14.858936   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 17:18:14.872716   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 17:18:14.878795   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 17:18:14.892467   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0422 17:18:14.897410   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 17:18:14.908528   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0422 17:18:14.913020   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 17:18:14.926200   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0422 17:18:14.931479   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0422 17:18:14.943951   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:18:14.969496   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:18:14.993368   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:18:15.017283   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:18:15.041614   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 17:18:15.066195   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 17:18:15.093045   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:18:15.119452   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:18:15.145896   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:18:15.172473   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:18:15.197815   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:18:15.223804   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 17:18:15.242470   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 17:18:15.260737   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 17:18:15.278450   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 17:18:15.295819   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 17:18:15.314207   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0422 17:18:15.333047   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 17:18:15.351644   30338 ssh_runner.go:195] Run: openssl version
	I0422 17:18:15.357445   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:18:15.369147   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:18:15.373944   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:18:15.373997   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:18:15.379931   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:18:15.391585   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:18:15.403104   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:18:15.407990   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:18:15.408037   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:18:15.414011   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:18:15.426928   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:18:15.440211   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:15.445267   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:15.445327   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:18:15.451531   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:18:15.463491   30338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:18:15.467879   30338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 17:18:15.467926   30338 kubeadm.go:928] updating node {m02 192.168.39.56 8443 v1.30.0 crio true true} ...
	I0422 17:18:15.468000   30338 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.56
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:18:15.468023   30338 kube-vip.go:111] generating kube-vip config ...
	I0422 17:18:15.468058   30338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:18:15.485053   30338 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:18:15.485118   30338 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0422 17:18:15.485175   30338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:18:15.496207   30338 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 17:18:15.496273   30338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 17:18:15.508457   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 17:18:15.508484   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:18:15.508543   30338 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet
	I0422 17:18:15.508574   30338 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm
	I0422 17:18:15.508554   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:18:15.513360   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 17:18:15.513388   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 17:18:16.338624   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:18:16.338695   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:18:16.344483   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 17:18:16.344518   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 17:18:16.661755   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:18:16.678827   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:18:16.678921   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:18:16.683453   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 17:18:16.683481   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0422 17:18:17.126421   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 17:18:17.136530   30338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0422 17:18:17.154383   30338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:18:17.172209   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 17:18:17.190007   30338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:18:17.194247   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:18:17.207348   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:18:17.331117   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:18:17.348555   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:18:17.349012   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:18:17.349068   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:18:17.363594   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0422 17:18:17.364056   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:18:17.364557   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:18:17.364581   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:18:17.364874   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:18:17.365050   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:18:17.365157   30338 start.go:316] joinCluster: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:18:17.365276   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 17:18:17.365289   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:18:17.368150   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:17.368603   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:18:17.368635   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:18:17.368785   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:18:17.368960   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:18:17.369205   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:18:17.369361   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:18:17.627712   30338 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:18:17.627758   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pcnw2.r0uhk8w13xxqxqvc --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m02 --control-plane --apiserver-advertise-address=192.168.39.56 --apiserver-bind-port=8443"
	I0422 17:18:40.513292   30338 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1pcnw2.r0uhk8w13xxqxqvc --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m02 --control-plane --apiserver-advertise-address=192.168.39.56 --apiserver-bind-port=8443": (22.885505336s)
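The two ssh_runner commands above are the whole join flow: `kubeadm token create --print-join-command` runs on the existing control plane to mint a bootstrap token, then the resulting `kubeadm join ... --control-plane` command is executed on m02 (about 22.9s here). Below is a minimal, illustrative sketch of assembling such a join command from its parts; the helper name and placeholder token/hash are assumptions, and in practice minikube takes the token and CA-cert hash straight from the printed join command.

```go
package main

import (
	"fmt"
	"strings"
)

// buildControlPlaneJoin assembles a kubeadm join invocation shaped like the
// one logged above. Purely illustrative; not minikube's actual helper.
func buildControlPlaneJoin(endpoint, token, caCertHash, criSocket, nodeName, advertiseAddr string, bindPort int) string {
	parts := []string{
		"kubeadm join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caCertHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", criSocket,
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseAddr,
		fmt.Sprintf("--apiserver-bind-port=%d", bindPort),
	}
	return strings.Join(parts, " ")
}

func main() {
	cmd := buildControlPlaneJoin(
		"control-plane.minikube.internal:8443",
		"<token>", "sha256:<hash>", // placeholders, not the values from this run
		"unix:///var/run/crio/crio.sock",
		"ha-025067-m02", "192.168.39.56", 8443,
	)
	fmt.Println(cmd)
}
```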
	I0422 17:18:40.513329   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 17:18:41.124919   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-025067-m02 minikube.k8s.io/updated_at=2024_04_22T17_18_41_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=ha-025067 minikube.k8s.io/primary=false
	I0422 17:18:41.286158   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-025067-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 17:18:41.415448   30338 start.go:318] duration metric: took 24.050283777s to joinCluster
	I0422 17:18:41.415532   30338 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:18:41.417217   30338 out.go:177] * Verifying Kubernetes components...
	I0422 17:18:41.415817   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:18:41.418824   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:18:41.719944   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:18:41.801607   30338 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:18:41.801821   30338 kapi.go:59] client config for ha-025067: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 17:18:41.801880   30338 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.22:8443
	I0422 17:18:41.802044   30338 node_ready.go:35] waiting up to 6m0s for node "ha-025067-m02" to be "Ready" ...
	I0422 17:18:41.802127   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:41.802135   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:41.802143   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:41.802146   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:41.813295   30338 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0422 17:18:42.303296   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:42.303320   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:42.303330   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:42.303336   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:42.306497   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:42.802979   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:42.803005   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:42.803017   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:42.803024   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:42.807306   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:43.303029   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:43.303049   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:43.303057   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:43.303060   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:43.307003   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:43.803281   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:43.803327   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:43.803335   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:43.803339   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:43.812036   30338 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 17:18:43.812788   30338 node_ready.go:53] node "ha-025067-m02" has status "Ready":"False"
	I0422 17:18:44.302366   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:44.302388   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:44.302396   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:44.302401   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:44.306400   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:44.802285   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:44.802312   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:44.802323   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:44.802327   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:44.806236   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:45.303263   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:45.303286   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:45.303294   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:45.303298   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:45.306644   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:45.802342   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:45.802375   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:45.802387   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:45.802392   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:45.805794   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:46.303159   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:46.303184   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:46.303192   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:46.303195   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:46.307759   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:46.308529   30338 node_ready.go:53] node "ha-025067-m02" has status "Ready":"False"
	I0422 17:18:46.803027   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:46.803058   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:46.803068   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:46.803074   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:46.806330   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:47.303231   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:47.303268   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:47.303276   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:47.303281   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:47.307625   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:47.802259   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:47.802314   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:47.802325   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:47.802334   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:47.805691   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.303154   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:48.303181   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.303192   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.303196   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.306489   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.802850   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:48.802876   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.802888   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.802896   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.809475   30338 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 17:18:48.810085   30338 node_ready.go:49] node "ha-025067-m02" has status "Ready":"True"
	I0422 17:18:48.810116   30338 node_ready.go:38] duration metric: took 7.008048903s for node "ha-025067-m02" to be "Ready" ...
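node_ready.go above polls GET /api/v1/nodes/ha-025067-m02 roughly every 500ms until the node's Ready condition reports True (about 7s in this run). The following is a minimal client-go sketch of the same loop, assuming a kubeconfig at the default location; the helper name is illustrative, not minikube's.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady re-fetches the node until its Ready condition is True,
// mirroring the repeated GET /api/v1/nodes/<name> calls in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-025067-m02", 500*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
```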
	I0422 17:18:48.810131   30338 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 17:18:48.810232   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:48.810243   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.810250   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.810254   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.815491   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:48.822373   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.822467   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nswqp
	I0422 17:18:48.822483   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.822494   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.822499   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.825459   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.826264   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:48.826280   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.826286   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.826291   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.829367   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.830213   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:48.830232   30338 pod_ready.go:81] duration metric: took 7.833056ms for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.830241   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.830289   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vrl4h
	I0422 17:18:48.830298   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.830305   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.830310   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.833234   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.834049   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:48.834062   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.834070   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.834076   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.837356   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:48.837820   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:48.837840   30338 pod_ready.go:81] duration metric: took 7.592161ms for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.837852   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.837913   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067
	I0422 17:18:48.837924   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.837933   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.837940   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.840217   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.840844   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:48.840862   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.840871   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.840875   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.842868   30338 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 17:18:48.843420   30338 pod_ready.go:92] pod "etcd-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:48.843435   30338 pod_ready.go:81] duration metric: took 5.575474ms for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.843442   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:48.843496   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:48.843504   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.843510   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.843517   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.846010   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:48.846815   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:48.846830   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:48.846836   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:48.846840   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:48.848962   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:49.344421   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:49.344450   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.344461   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.344468   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.409699   30338 round_trippers.go:574] Response Status: 200 OK in 65 milliseconds
	I0422 17:18:49.410869   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:49.410889   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.410896   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.410900   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.413510   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:49.844304   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:49.844327   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.844334   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.844341   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.847973   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:49.848836   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:49.848851   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:49.848857   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:49.848861   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:49.851243   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:50.344106   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:50.344129   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.344138   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.344155   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.347503   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:50.348114   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:50.348128   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.348135   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.348140   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.350729   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:50.843724   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:50.843745   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.843754   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.843757   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.847358   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:50.848008   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:50.848029   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:50.848039   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:50.848044   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:50.850968   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:50.851859   30338 pod_ready.go:102] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 17:18:51.344344   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:51.344367   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.344374   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.344379   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.348369   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:51.349069   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:51.349085   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.349095   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.349102   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.351896   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:51.843712   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:51.843734   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.843742   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.843745   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.847100   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:51.848256   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:51.848275   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:51.848286   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:51.848290   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:51.851228   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:52.344416   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:52.344448   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.344459   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.344464   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.348046   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:52.348910   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:52.348925   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.348935   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.348940   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.352502   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:52.843652   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:52.843681   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.843692   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.843697   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.846959   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:52.847851   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:52.847872   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:52.847882   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:52.847887   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:52.852101   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:52.852823   30338 pod_ready.go:102] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 17:18:53.344307   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:18:53.344330   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.344351   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.344356   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.347668   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:53.348336   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:53.348350   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.348357   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.348361   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.350905   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.351681   30338 pod_ready.go:92] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.351699   30338 pod_ready.go:81] duration metric: took 4.508251275s for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
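pod_ready.go repeats the same pattern per system-critical pod: fetch the pod, then the node it is scheduled on, until the pod's Ready condition is True (etcd-ha-025067-m02 took about 4.5s above). A short client-go sketch of the Ready-condition check itself follows; the pod and namespace names are taken from the log, everything else is illustrative.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True, which is the
// status behind the pod_ready.go `"Ready":"True"` lines above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-ha-025067-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}
```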
	I0422 17:18:53.351712   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.351763   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067
	I0422 17:18:53.351770   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.351777   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.351783   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.354640   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.355363   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:53.355382   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.355389   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.355392   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.357695   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.358179   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.358196   30338 pod_ready.go:81] duration metric: took 6.478929ms for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.358204   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.358242   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m02
	I0422 17:18:53.358246   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.358253   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.358257   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.360805   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.361356   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:53.361370   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.361376   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.361379   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.363591   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:53.364035   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.364056   30338 pod_ready.go:81] duration metric: took 5.842627ms for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.364064   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.403397   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067
	I0422 17:18:53.403419   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.403434   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.403438   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.406511   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:53.603464   30338 request.go:629] Waited for 196.351505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:53.603544   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:53.603552   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.603562   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.603569   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.606668   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:53.607255   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:53.607272   30338 pod_ready.go:81] duration metric: took 243.202638ms for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
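The request.go "Waited for ... due to client-side throttling, not priority and fairness" lines are produced by client-go's client-side rate limiter. The rest.Config dumped earlier in this log shows QPS:0 and Burst:0, so the library falls back to its defaults (5 requests/s, burst 10) and spaces out these back-to-back GETs. Below is a hedged sketch of raising those limits on a rest.Config; whether the test harness should actually do so is a separate question.

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go uses its defaults (5 requests/s,
	// burst 10), which is what produces the client-side throttling waits seen
	// in the log. Raising them removes that client-side delay.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```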
	I0422 17:18:53.607281   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:53.803734   30338 request.go:629] Waited for 196.394465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:53.803808   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:53.803814   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:53.803822   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:53.803828   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:53.808005   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:54.003359   30338 request.go:629] Waited for 194.356848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.003417   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.003421   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.003429   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.003433   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.007606   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:54.203188   30338 request.go:629] Waited for 95.250873ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:54.203244   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:54.203249   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.203256   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.203260   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.207278   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:54.403755   30338 request.go:629] Waited for 195.431105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.403831   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.403857   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.403870   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.403879   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.407442   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:54.608047   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:54.608074   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.608084   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.608090   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.615023   30338 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 17:18:54.803345   30338 request.go:629] Waited for 187.367934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.803412   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:54.803420   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:54.803427   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:54.803433   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:54.806995   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.108127   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:55.108157   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.108166   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.108173   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.111186   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:55.203259   30338 request.go:629] Waited for 91.300258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:55.203348   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:55.203356   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.203365   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.203373   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.207365   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.608435   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:55.608461   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.608469   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.608472   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.611909   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.612984   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:55.613002   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:55.613011   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:55.613017   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:55.616188   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:55.616950   30338 pod_ready.go:102] pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace has status "Ready":"False"
	I0422 17:18:56.108277   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:18:56.108304   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.108317   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.108322   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.113671   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:56.114804   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:56.114818   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.114826   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.114830   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.119275   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:56.120045   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:56.120063   30338 pod_ready.go:81] duration metric: took 2.512776333s for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.120073   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.203399   30338 request.go:629] Waited for 83.267456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:18:56.203514   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:18:56.203523   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.203531   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.203534   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.206848   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:56.403282   30338 request.go:629] Waited for 195.389862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:56.403337   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:56.403347   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.403354   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.403358   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.407782   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:56.408681   30338 pod_ready.go:92] pod "kube-proxy-dk5ww" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:56.408699   30338 pod_ready.go:81] duration metric: took 288.619685ms for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.408708   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.603167   30338 request.go:629] Waited for 194.396956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:18:56.603223   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:18:56.603238   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.603245   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.603249   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.606228   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:56.803462   30338 request.go:629] Waited for 196.39236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:56.803516   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:56.803521   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:56.803528   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:56.803532   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:56.807268   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:56.808023   30338 pod_ready.go:92] pod "kube-proxy-pf7cc" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:56.808043   30338 pod_ready.go:81] duration metric: took 399.329212ms for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:56.808052   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:57.003146   30338 request.go:629] Waited for 195.007817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:18:57.003226   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:18:57.003235   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.003244   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.003249   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.007292   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:57.203609   30338 request.go:629] Waited for 195.392162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:57.203694   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:18:57.203702   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.203714   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.203728   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.207763   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:18:57.209178   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:57.209198   30338 pod_ready.go:81] duration metric: took 401.138629ms for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:57.209217   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:57.403804   30338 request.go:629] Waited for 194.516914ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.403870   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.403878   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.403887   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.403893   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.407542   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:57.603053   30338 request.go:629] Waited for 194.24827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:57.603109   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:57.603153   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.603165   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.603169   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.605891   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:57.803089   30338 request.go:629] Waited for 93.263261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.803177   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:57.803186   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:57.803193   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:57.803197   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:57.806591   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.003745   30338 request.go:629] Waited for 196.38252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.003822   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.003830   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.003841   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.003850   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.007525   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.209401   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:58.209423   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.209431   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.209435   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.212551   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.403558   30338 request.go:629] Waited for 190.383656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.403646   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.403654   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.403661   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.403668   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.407607   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.710412   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:58.710442   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.710450   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.710453   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.714043   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:58.802904   30338 request.go:629] Waited for 88.185936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.802976   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:58.802981   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:58.802988   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:58.802997   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:58.806875   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:59.210350   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:18:59.210373   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.210384   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.210389   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.213785   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:18:59.214661   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:18:59.214675   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.214683   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.214689   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.217426   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:18:59.218276   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:18:59.218294   30338 pod_ready.go:81] duration metric: took 2.00906977s for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:18:59.218304   30338 pod_ready.go:38] duration metric: took 10.408148984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 17:18:59.218317   30338 api_server.go:52] waiting for apiserver process to appear ...
	I0422 17:18:59.218366   30338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:18:59.235446   30338 api_server.go:72] duration metric: took 17.819876382s to wait for apiserver process to appear ...
	I0422 17:18:59.235477   30338 api_server.go:88] waiting for apiserver healthz status ...
	I0422 17:18:59.235499   30338 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I0422 17:18:59.239726   30338 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
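api_server.go above waits for the kube-apiserver process with pgrep and then probes https://192.168.39.22:8443/healthz until it answers 200 with body "ok", before reading /version. A minimal sketch of that probe follows; the InsecureSkipVerify shortcut is only there to keep the example self-contained, whereas the real check trusts the cluster CA and client certificates.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative healthz probe against the endpoint seen in the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://192.168.39.22:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```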
	I0422 17:18:59.239804   30338 round_trippers.go:463] GET https://192.168.39.22:8443/version
	I0422 17:18:59.239812   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.239827   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.239836   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.240823   30338 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0422 17:18:59.240919   30338 api_server.go:141] control plane version: v1.30.0
	I0422 17:18:59.240937   30338 api_server.go:131] duration metric: took 5.451788ms to wait for apiserver health ...
	I0422 17:18:59.240947   30338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 17:18:59.403321   30338 request.go:629] Waited for 162.311156ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.403429   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.403439   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.403447   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.403461   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.409326   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:59.416517   30338 system_pods.go:59] 17 kube-system pods found
	I0422 17:18:59.416558   30338 system_pods.go:61] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:18:59.416572   30338 system_pods.go:61] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:18:59.416576   30338 system_pods.go:61] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:18:59.416579   30338 system_pods.go:61] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:18:59.416582   30338 system_pods.go:61] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:18:59.416585   30338 system_pods.go:61] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:18:59.416588   30338 system_pods.go:61] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:18:59.416591   30338 system_pods.go:61] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:18:59.416594   30338 system_pods.go:61] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:18:59.416597   30338 system_pods.go:61] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:18:59.416602   30338 system_pods.go:61] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:18:59.416606   30338 system_pods.go:61] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:18:59.416611   30338 system_pods.go:61] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:18:59.416630   30338 system_pods.go:61] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:18:59.416634   30338 system_pods.go:61] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:18:59.416638   30338 system_pods.go:61] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:18:59.416640   30338 system_pods.go:61] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:18:59.416647   30338 system_pods.go:74] duration metric: took 175.690311ms to wait for pod list to return data ...
	I0422 17:18:59.416656   30338 default_sa.go:34] waiting for default service account to be created ...
	I0422 17:18:59.603182   30338 request.go:629] Waited for 186.452528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:18:59.603242   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:18:59.603249   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.603258   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.603264   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.608475   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:59.608696   30338 default_sa.go:45] found service account: "default"
	I0422 17:18:59.608712   30338 default_sa.go:55] duration metric: took 192.047543ms for default service account to be created ...
	I0422 17:18:59.608720   30338 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 17:18:59.803194   30338 request.go:629] Waited for 194.417195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.803251   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:18:59.803256   30338 round_trippers.go:469] Request Headers:
	I0422 17:18:59.803263   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:18:59.803266   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:18:59.809180   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:18:59.813316   30338 system_pods.go:86] 17 kube-system pods found
	I0422 17:18:59.813345   30338 system_pods.go:89] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:18:59.813350   30338 system_pods.go:89] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:18:59.813355   30338 system_pods.go:89] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:18:59.813358   30338 system_pods.go:89] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:18:59.813363   30338 system_pods.go:89] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:18:59.813367   30338 system_pods.go:89] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:18:59.813370   30338 system_pods.go:89] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:18:59.813374   30338 system_pods.go:89] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:18:59.813377   30338 system_pods.go:89] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:18:59.813381   30338 system_pods.go:89] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:18:59.813385   30338 system_pods.go:89] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:18:59.813389   30338 system_pods.go:89] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:18:59.813392   30338 system_pods.go:89] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:18:59.813396   30338 system_pods.go:89] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:18:59.813399   30338 system_pods.go:89] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:18:59.813402   30338 system_pods.go:89] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:18:59.813405   30338 system_pods.go:89] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:18:59.813411   30338 system_pods.go:126] duration metric: took 204.687482ms to wait for k8s-apps to be running ...
	I0422 17:18:59.813420   30338 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 17:18:59.813465   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:18:59.829589   30338 system_svc.go:56] duration metric: took 16.160392ms WaitForService to wait for kubelet
	I0422 17:18:59.829616   30338 kubeadm.go:576] duration metric: took 18.414051448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:18:59.829634   30338 node_conditions.go:102] verifying NodePressure condition ...
	I0422 17:19:00.002907   30338 request.go:629] Waited for 173.204088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes
	I0422 17:19:00.002991   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes
	I0422 17:19:00.002998   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:00.003008   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:00.003016   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:00.006533   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:19:00.007192   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:19:00.007213   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:19:00.007226   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:19:00.007231   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:19:00.007236   30338 node_conditions.go:105] duration metric: took 177.597848ms to run NodePressure ...
	I0422 17:19:00.007250   30338 start.go:240] waiting for startup goroutines ...
	I0422 17:19:00.007277   30338 start.go:254] writing updated cluster config ...
	I0422 17:19:00.010089   30338 out.go:177] 
	I0422 17:19:00.011879   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:00.011986   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:19:00.013767   30338 out.go:177] * Starting "ha-025067-m03" control-plane node in "ha-025067" cluster
	I0422 17:19:00.014985   30338 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:19:00.015009   30338 cache.go:56] Caching tarball of preloaded images
	I0422 17:19:00.015114   30338 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:19:00.015141   30338 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:19:00.015243   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:19:00.015426   30338 start.go:360] acquireMachinesLock for ha-025067-m03: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:19:00.015487   30338 start.go:364] duration metric: took 39.538µs to acquireMachinesLock for "ha-025067-m03"
	I0422 17:19:00.015511   30338 start.go:93] Provisioning new machine with config: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:19:00.015619   30338 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0422 17:19:00.017385   30338 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 17:19:00.017486   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:00.017526   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:00.032459   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34859
	I0422 17:19:00.032874   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:00.033374   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:00.033393   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:00.033721   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:00.033907   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:00.034008   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:00.034222   30338 start.go:159] libmachine.API.Create for "ha-025067" (driver="kvm2")
	I0422 17:19:00.034270   30338 client.go:168] LocalClient.Create starting
	I0422 17:19:00.034314   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 17:19:00.034355   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:19:00.034374   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:19:00.034438   30338 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 17:19:00.034466   30338 main.go:141] libmachine: Decoding PEM data...
	I0422 17:19:00.034482   30338 main.go:141] libmachine: Parsing certificate...
	I0422 17:19:00.034510   30338 main.go:141] libmachine: Running pre-create checks...
	I0422 17:19:00.034521   30338 main.go:141] libmachine: (ha-025067-m03) Calling .PreCreateCheck
	I0422 17:19:00.034759   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetConfigRaw
	I0422 17:19:00.035234   30338 main.go:141] libmachine: Creating machine...
	I0422 17:19:00.035252   30338 main.go:141] libmachine: (ha-025067-m03) Calling .Create
	I0422 17:19:00.035398   30338 main.go:141] libmachine: (ha-025067-m03) Creating KVM machine...
	I0422 17:19:00.036655   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found existing default KVM network
	I0422 17:19:00.036752   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found existing private KVM network mk-ha-025067
	I0422 17:19:00.036922   30338 main.go:141] libmachine: (ha-025067-m03) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03 ...
	I0422 17:19:00.036945   30338 main.go:141] libmachine: (ha-025067-m03) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 17:19:00.037001   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.036880   31595 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:19:00.037065   30338 main.go:141] libmachine: (ha-025067-m03) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 17:19:00.246743   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.246609   31595 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa...
	I0422 17:19:00.355574   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.355473   31595 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/ha-025067-m03.rawdisk...
	I0422 17:19:00.355598   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Writing magic tar header
	I0422 17:19:00.355609   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Writing SSH key tar header
	I0422 17:19:00.355617   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:00.355577   31595 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03 ...
	I0422 17:19:00.355676   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03
	I0422 17:19:00.355691   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 17:19:00.355700   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03 (perms=drwx------)
	I0422 17:19:00.355749   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:19:00.355774   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 17:19:00.355790   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 17:19:00.355805   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 17:19:00.355824   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 17:19:00.355834   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home/jenkins
	I0422 17:19:00.355851   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 17:19:00.355865   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 17:19:00.355875   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Checking permissions on dir: /home
	I0422 17:19:00.355892   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Skipping /home - not owner
	I0422 17:19:00.355908   30338 main.go:141] libmachine: (ha-025067-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 17:19:00.355919   30338 main.go:141] libmachine: (ha-025067-m03) Creating domain...
	I0422 17:19:00.356771   30338 main.go:141] libmachine: (ha-025067-m03) define libvirt domain using xml: 
	I0422 17:19:00.356815   30338 main.go:141] libmachine: (ha-025067-m03) <domain type='kvm'>
	I0422 17:19:00.356831   30338 main.go:141] libmachine: (ha-025067-m03)   <name>ha-025067-m03</name>
	I0422 17:19:00.356847   30338 main.go:141] libmachine: (ha-025067-m03)   <memory unit='MiB'>2200</memory>
	I0422 17:19:00.356860   30338 main.go:141] libmachine: (ha-025067-m03)   <vcpu>2</vcpu>
	I0422 17:19:00.356870   30338 main.go:141] libmachine: (ha-025067-m03)   <features>
	I0422 17:19:00.356881   30338 main.go:141] libmachine: (ha-025067-m03)     <acpi/>
	I0422 17:19:00.356890   30338 main.go:141] libmachine: (ha-025067-m03)     <apic/>
	I0422 17:19:00.356898   30338 main.go:141] libmachine: (ha-025067-m03)     <pae/>
	I0422 17:19:00.356903   30338 main.go:141] libmachine: (ha-025067-m03)     
	I0422 17:19:00.356909   30338 main.go:141] libmachine: (ha-025067-m03)   </features>
	I0422 17:19:00.356913   30338 main.go:141] libmachine: (ha-025067-m03)   <cpu mode='host-passthrough'>
	I0422 17:19:00.356918   30338 main.go:141] libmachine: (ha-025067-m03)   
	I0422 17:19:00.356922   30338 main.go:141] libmachine: (ha-025067-m03)   </cpu>
	I0422 17:19:00.356965   30338 main.go:141] libmachine: (ha-025067-m03)   <os>
	I0422 17:19:00.356993   30338 main.go:141] libmachine: (ha-025067-m03)     <type>hvm</type>
	I0422 17:19:00.357027   30338 main.go:141] libmachine: (ha-025067-m03)     <boot dev='cdrom'/>
	I0422 17:19:00.357049   30338 main.go:141] libmachine: (ha-025067-m03)     <boot dev='hd'/>
	I0422 17:19:00.357063   30338 main.go:141] libmachine: (ha-025067-m03)     <bootmenu enable='no'/>
	I0422 17:19:00.357073   30338 main.go:141] libmachine: (ha-025067-m03)   </os>
	I0422 17:19:00.357088   30338 main.go:141] libmachine: (ha-025067-m03)   <devices>
	I0422 17:19:00.357099   30338 main.go:141] libmachine: (ha-025067-m03)     <disk type='file' device='cdrom'>
	I0422 17:19:00.357114   30338 main.go:141] libmachine: (ha-025067-m03)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/boot2docker.iso'/>
	I0422 17:19:00.357130   30338 main.go:141] libmachine: (ha-025067-m03)       <target dev='hdc' bus='scsi'/>
	I0422 17:19:00.357142   30338 main.go:141] libmachine: (ha-025067-m03)       <readonly/>
	I0422 17:19:00.357149   30338 main.go:141] libmachine: (ha-025067-m03)     </disk>
	I0422 17:19:00.357162   30338 main.go:141] libmachine: (ha-025067-m03)     <disk type='file' device='disk'>
	I0422 17:19:00.357175   30338 main.go:141] libmachine: (ha-025067-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 17:19:00.357190   30338 main.go:141] libmachine: (ha-025067-m03)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/ha-025067-m03.rawdisk'/>
	I0422 17:19:00.357206   30338 main.go:141] libmachine: (ha-025067-m03)       <target dev='hda' bus='virtio'/>
	I0422 17:19:00.357218   30338 main.go:141] libmachine: (ha-025067-m03)     </disk>
	I0422 17:19:00.357229   30338 main.go:141] libmachine: (ha-025067-m03)     <interface type='network'>
	I0422 17:19:00.357243   30338 main.go:141] libmachine: (ha-025067-m03)       <source network='mk-ha-025067'/>
	I0422 17:19:00.357251   30338 main.go:141] libmachine: (ha-025067-m03)       <model type='virtio'/>
	I0422 17:19:00.357263   30338 main.go:141] libmachine: (ha-025067-m03)     </interface>
	I0422 17:19:00.357277   30338 main.go:141] libmachine: (ha-025067-m03)     <interface type='network'>
	I0422 17:19:00.357290   30338 main.go:141] libmachine: (ha-025067-m03)       <source network='default'/>
	I0422 17:19:00.357301   30338 main.go:141] libmachine: (ha-025067-m03)       <model type='virtio'/>
	I0422 17:19:00.357313   30338 main.go:141] libmachine: (ha-025067-m03)     </interface>
	I0422 17:19:00.357330   30338 main.go:141] libmachine: (ha-025067-m03)     <serial type='pty'>
	I0422 17:19:00.357343   30338 main.go:141] libmachine: (ha-025067-m03)       <target port='0'/>
	I0422 17:19:00.357353   30338 main.go:141] libmachine: (ha-025067-m03)     </serial>
	I0422 17:19:00.357361   30338 main.go:141] libmachine: (ha-025067-m03)     <console type='pty'>
	I0422 17:19:00.357366   30338 main.go:141] libmachine: (ha-025067-m03)       <target type='serial' port='0'/>
	I0422 17:19:00.357392   30338 main.go:141] libmachine: (ha-025067-m03)     </console>
	I0422 17:19:00.357412   30338 main.go:141] libmachine: (ha-025067-m03)     <rng model='virtio'>
	I0422 17:19:00.357430   30338 main.go:141] libmachine: (ha-025067-m03)       <backend model='random'>/dev/random</backend>
	I0422 17:19:00.357446   30338 main.go:141] libmachine: (ha-025067-m03)     </rng>
	I0422 17:19:00.357461   30338 main.go:141] libmachine: (ha-025067-m03)     
	I0422 17:19:00.357474   30338 main.go:141] libmachine: (ha-025067-m03)     
	I0422 17:19:00.357487   30338 main.go:141] libmachine: (ha-025067-m03)   </devices>
	I0422 17:19:00.357497   30338 main.go:141] libmachine: (ha-025067-m03) </domain>
	I0422 17:19:00.357511   30338 main.go:141] libmachine: (ha-025067-m03) 
	I0422 17:19:00.365198   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:17:58:f5 in network default
	I0422 17:19:00.366022   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:00.366069   30338 main.go:141] libmachine: (ha-025067-m03) Ensuring networks are active...
	I0422 17:19:00.366777   30338 main.go:141] libmachine: (ha-025067-m03) Ensuring network default is active
	I0422 17:19:00.367175   30338 main.go:141] libmachine: (ha-025067-m03) Ensuring network mk-ha-025067 is active
	I0422 17:19:00.367662   30338 main.go:141] libmachine: (ha-025067-m03) Getting domain xml...
	I0422 17:19:00.368395   30338 main.go:141] libmachine: (ha-025067-m03) Creating domain...
	I0422 17:19:01.598047   30338 main.go:141] libmachine: (ha-025067-m03) Waiting to get IP...
	I0422 17:19:01.598841   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:01.599342   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:01.599379   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:01.599331   31595 retry.go:31] will retry after 244.474614ms: waiting for machine to come up
	I0422 17:19:01.845861   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:01.846396   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:01.846437   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:01.846349   31595 retry.go:31] will retry after 251.22244ms: waiting for machine to come up
	I0422 17:19:02.098746   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:02.099263   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:02.099291   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:02.099213   31595 retry.go:31] will retry after 295.500227ms: waiting for machine to come up
	I0422 17:19:02.396509   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:02.397019   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:02.397049   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:02.396975   31595 retry.go:31] will retry after 482.051032ms: waiting for machine to come up
	I0422 17:19:02.880143   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:02.880651   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:02.880684   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:02.880590   31595 retry.go:31] will retry after 711.029818ms: waiting for machine to come up
	I0422 17:19:03.593368   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:03.593807   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:03.593835   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:03.593755   31595 retry.go:31] will retry after 718.341687ms: waiting for machine to come up
	I0422 17:19:04.313375   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:04.313803   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:04.313886   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:04.313750   31595 retry.go:31] will retry after 747.746364ms: waiting for machine to come up
	I0422 17:19:05.063188   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:05.063669   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:05.063699   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:05.063636   31595 retry.go:31] will retry after 1.482792332s: waiting for machine to come up
	I0422 17:19:06.548134   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:06.548546   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:06.548580   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:06.548508   31595 retry.go:31] will retry after 1.591222295s: waiting for machine to come up
	I0422 17:19:08.141775   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:08.142271   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:08.142299   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:08.142226   31595 retry.go:31] will retry after 1.545760207s: waiting for machine to come up
	I0422 17:19:09.689109   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:09.689528   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:09.689559   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:09.689467   31595 retry.go:31] will retry after 2.68939632s: waiting for machine to come up
	I0422 17:19:12.380233   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:12.380565   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:12.380584   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:12.380538   31595 retry.go:31] will retry after 2.724038671s: waiting for machine to come up
	I0422 17:19:15.106266   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:15.106707   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:15.106730   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:15.106664   31595 retry.go:31] will retry after 3.963134485s: waiting for machine to come up
	I0422 17:19:19.074771   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:19.075307   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find current IP address of domain ha-025067-m03 in network mk-ha-025067
	I0422 17:19:19.075347   30338 main.go:141] libmachine: (ha-025067-m03) DBG | I0422 17:19:19.075256   31595 retry.go:31] will retry after 5.52357941s: waiting for machine to come up
	I0422 17:19:24.601566   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.602004   30338 main.go:141] libmachine: (ha-025067-m03) Found IP for machine: 192.168.39.220
	I0422 17:19:24.602021   30338 main.go:141] libmachine: (ha-025067-m03) Reserving static IP address...
	I0422 17:19:24.602035   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has current primary IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.602411   30338 main.go:141] libmachine: (ha-025067-m03) DBG | unable to find host DHCP lease matching {name: "ha-025067-m03", mac: "52:54:00:d5:51:30", ip: "192.168.39.220"} in network mk-ha-025067
	I0422 17:19:24.675429   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Getting to WaitForSSH function...
	I0422 17:19:24.675461   30338 main.go:141] libmachine: (ha-025067-m03) Reserved static IP address: 192.168.39.220
	I0422 17:19:24.675475   30338 main.go:141] libmachine: (ha-025067-m03) Waiting for SSH to be available...
	I0422 17:19:24.677939   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.678358   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:24.678394   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.678542   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Using SSH client type: external
	I0422 17:19:24.678569   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa (-rw-------)
	I0422 17:19:24.678599   30338 main.go:141] libmachine: (ha-025067-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 17:19:24.678712   30338 main.go:141] libmachine: (ha-025067-m03) DBG | About to run SSH command:
	I0422 17:19:24.678730   30338 main.go:141] libmachine: (ha-025067-m03) DBG | exit 0
	I0422 17:19:24.803345   30338 main.go:141] libmachine: (ha-025067-m03) DBG | SSH cmd err, output: <nil>: 
	I0422 17:19:24.803636   30338 main.go:141] libmachine: (ha-025067-m03) KVM machine creation complete!
	I0422 17:19:24.804017   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetConfigRaw
	I0422 17:19:24.804550   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:24.804756   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:24.804913   30338 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 17:19:24.804928   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:19:24.806319   30338 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 17:19:24.806334   30338 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 17:19:24.806340   30338 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 17:19:24.806345   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:24.808586   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.808971   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:24.808997   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.809143   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:24.809315   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.809463   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.809569   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:24.809744   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:24.809965   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:24.809976   30338 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 17:19:24.918733   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:19:24.918758   30338 main.go:141] libmachine: Detecting the provisioner...
	I0422 17:19:24.918770   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:24.921489   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.921876   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:24.921900   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:24.922070   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:24.922245   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.922432   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:24.922565   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:24.922722   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:24.922879   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:24.922891   30338 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 17:19:25.028496   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 17:19:25.028556   30338 main.go:141] libmachine: found compatible host: buildroot
	I0422 17:19:25.028565   30338 main.go:141] libmachine: Provisioning with buildroot...
	I0422 17:19:25.028575   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:25.028914   30338 buildroot.go:166] provisioning hostname "ha-025067-m03"
	I0422 17:19:25.028945   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:25.029218   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.032170   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.032603   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.032634   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.032869   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.033034   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.033296   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.033491   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.033677   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:25.033861   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:25.033877   30338 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067-m03 && echo "ha-025067-m03" | sudo tee /etc/hostname
	I0422 17:19:25.162873   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067-m03
	
	I0422 17:19:25.162902   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.165681   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.166088   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.166115   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.166350   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.166515   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.166719   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.166863   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.167012   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:25.167263   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:25.167281   30338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:19:25.285404   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:19:25.285436   30338 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:19:25.285457   30338 buildroot.go:174] setting up certificates
	I0422 17:19:25.285476   30338 provision.go:84] configureAuth start
	I0422 17:19:25.285493   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetMachineName
	I0422 17:19:25.285752   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:25.288807   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.289257   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.289288   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.289456   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.291665   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.292124   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.292152   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.292306   30338 provision.go:143] copyHostCerts
	I0422 17:19:25.292341   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:19:25.292381   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:19:25.292403   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:19:25.292466   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:19:25.292541   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:19:25.292558   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:19:25.292565   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:19:25.292587   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:19:25.292629   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:19:25.292645   30338 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:19:25.292652   30338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:19:25.292671   30338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:19:25.292718   30338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067-m03 san=[127.0.0.1 192.168.39.220 ha-025067-m03 localhost minikube]
	I0422 17:19:25.497634   30338 provision.go:177] copyRemoteCerts
	I0422 17:19:25.497698   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:19:25.497719   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.500463   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.500806   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.500841   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.501023   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.501276   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.501474   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.501632   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:25.586916   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:19:25.586991   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 17:19:25.612978   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:19:25.613052   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:19:25.639265   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:19:25.639366   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 17:19:25.665128   30338 provision.go:87] duration metric: took 379.636943ms to configureAuth
	I0422 17:19:25.665156   30338 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:19:25.665381   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:25.665462   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.668354   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.668759   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.668787   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.668967   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.669179   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.669372   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.669526   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.669709   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:25.669861   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:25.669877   30338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:19:25.964438   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:19:25.964471   30338 main.go:141] libmachine: Checking connection to Docker...
	I0422 17:19:25.964482   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetURL
	I0422 17:19:25.965870   30338 main.go:141] libmachine: (ha-025067-m03) DBG | Using libvirt version 6000000
	I0422 17:19:25.968178   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.968500   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.968542   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.968770   30338 main.go:141] libmachine: Docker is up and running!
	I0422 17:19:25.968785   30338 main.go:141] libmachine: Reticulating splines...
	I0422 17:19:25.968792   30338 client.go:171] duration metric: took 25.93451s to LocalClient.Create
	I0422 17:19:25.968818   30338 start.go:167] duration metric: took 25.934601441s to libmachine.API.Create "ha-025067"
	I0422 17:19:25.968830   30338 start.go:293] postStartSetup for "ha-025067-m03" (driver="kvm2")
	I0422 17:19:25.968844   30338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:19:25.968865   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:25.969114   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:19:25.969137   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:25.971550   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.971990   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:25.972007   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:25.972216   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:25.972410   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:25.972559   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:25.972709   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:26.058474   30338 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:19:26.063482   30338 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:19:26.063510   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:19:26.063588   30338 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:19:26.063682   30338 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:19:26.063694   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:19:26.063815   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:19:26.074247   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:19:26.101563   30338 start.go:296] duration metric: took 132.698316ms for postStartSetup
	I0422 17:19:26.101614   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetConfigRaw
	I0422 17:19:26.102182   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:26.105117   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.105507   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.105540   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.105854   30338 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:19:26.106115   30338 start.go:128] duration metric: took 26.090482271s to createHost
	I0422 17:19:26.106145   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:26.108308   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.108669   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.108693   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.108903   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:26.109091   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.109263   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.109441   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:26.109610   30338 main.go:141] libmachine: Using SSH client type: native
	I0422 17:19:26.109766   30338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0422 17:19:26.109776   30338 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:19:26.212431   30338 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806366.186296116
	
	I0422 17:19:26.212458   30338 fix.go:216] guest clock: 1713806366.186296116
	I0422 17:19:26.212467   30338 fix.go:229] Guest: 2024-04-22 17:19:26.186296116 +0000 UTC Remote: 2024-04-22 17:19:26.106130991 +0000 UTC m=+153.613398839 (delta=80.165125ms)
	I0422 17:19:26.212481   30338 fix.go:200] guest clock delta is within tolerance: 80.165125ms
	I0422 17:19:26.212485   30338 start.go:83] releasing machines lock for "ha-025067-m03", held for 26.196987955s
	I0422 17:19:26.212501   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.212814   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:26.215926   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.216275   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.216299   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.218736   30338 out.go:177] * Found network options:
	I0422 17:19:26.220289   30338 out.go:177]   - NO_PROXY=192.168.39.22,192.168.39.56
	W0422 17:19:26.221805   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 17:19:26.221830   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 17:19:26.221851   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.222469   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.222671   30338 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:19:26.222777   30338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:19:26.222811   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	W0422 17:19:26.222917   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	W0422 17:19:26.222942   30338 proxy.go:119] fail to check proxy env: Error ip not in block
	I0422 17:19:26.223010   30338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:19:26.223035   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:19:26.225776   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226051   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226106   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.226145   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226316   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:26.226500   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.226586   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:26.226610   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:26.226678   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:26.226830   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:19:26.226908   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:26.227035   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:19:26.227200   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:19:26.227362   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:19:26.464942   30338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:19:26.472422   30338 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:19:26.472501   30338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:19:26.491058   30338 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 17:19:26.491084   30338 start.go:494] detecting cgroup driver to use...
	I0422 17:19:26.491170   30338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:19:26.509584   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:19:26.526690   30338 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:19:26.526748   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:19:26.543143   30338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:19:26.558862   30338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:19:26.686214   30338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:19:26.826319   30338 docker.go:233] disabling docker service ...
	I0422 17:19:26.826418   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:19:26.844632   30338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:19:26.859567   30338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:19:26.996620   30338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:19:27.123443   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:19:27.139044   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:19:27.159963   30338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:19:27.160017   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.171331   30338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:19:27.171402   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.183307   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.195182   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.207767   30338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:19:27.220048   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.232143   30338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.251630   30338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:19:27.262786   30338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:19:27.273390   30338 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 17:19:27.273448   30338 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 17:19:27.287468   30338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:19:27.297408   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:19:27.411513   30338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:19:27.558913   30338 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:19:27.558988   30338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:19:27.564023   30338 start.go:562] Will wait 60s for crictl version
	I0422 17:19:27.564072   30338 ssh_runner.go:195] Run: which crictl
	I0422 17:19:27.568132   30338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:19:27.607546   30338 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:19:27.607635   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:19:27.636210   30338 ssh_runner.go:195] Run: crio --version
	I0422 17:19:27.669693   30338 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:19:27.671231   30338 out.go:177]   - env NO_PROXY=192.168.39.22
	I0422 17:19:27.672698   30338 out.go:177]   - env NO_PROXY=192.168.39.22,192.168.39.56
	I0422 17:19:27.673944   30338 main.go:141] libmachine: (ha-025067-m03) Calling .GetIP
	I0422 17:19:27.676893   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:27.677358   30338 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:19:27.677378   30338 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:19:27.677614   30338 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:19:27.682091   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:19:27.695805   30338 mustload.go:65] Loading cluster: ha-025067
	I0422 17:19:27.696020   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:27.696262   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:27.696297   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:27.710954   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0422 17:19:27.711421   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:27.711967   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:27.711994   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:27.712305   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:27.712501   30338 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:19:27.714037   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:19:27.714312   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:27.714356   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:27.730385   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0422 17:19:27.730803   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:27.731269   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:27.731292   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:27.731556   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:27.731728   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:19:27.731925   30338 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.220
	I0422 17:19:27.731938   30338 certs.go:194] generating shared ca certs ...
	I0422 17:19:27.731951   30338 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:19:27.732064   30338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:19:27.732100   30338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:19:27.732109   30338 certs.go:256] generating profile certs ...
	I0422 17:19:27.732172   30338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:19:27.732202   30338 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b
	I0422 17:19:27.732215   30338 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.56 192.168.39.220 192.168.39.254]
	I0422 17:19:27.884238   30338 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b ...
	I0422 17:19:27.884271   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b: {Name:mkf8a1a5c9798bf319c88d21c1edd7b4d37d492a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:19:27.884442   30338 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b ...
	I0422 17:19:27.884455   30338 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b: {Name:mkbc4ef4912eb3022a46d9eb81eca9c84bc0f030 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:19:27.884522   30338 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.7f67eb3b -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:19:27.884645   30338 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.7f67eb3b -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
	I0422 17:19:27.884764   30338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:19:27.884780   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:19:27.884792   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:19:27.884806   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:19:27.884818   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:19:27.884831   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:19:27.884846   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:19:27.884860   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:19:27.884871   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:19:27.884917   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:19:27.884943   30338 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:19:27.884953   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:19:27.884977   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:19:27.884997   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:19:27.885018   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:19:27.885055   30338 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:19:27.885079   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:19:27.885093   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:27.885105   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:19:27.885142   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:19:27.888593   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:27.889027   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:19:27.889061   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:27.889210   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:19:27.889475   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:19:27.889649   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:19:27.889877   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:19:27.967607   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0422 17:19:27.977637   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0422 17:19:27.990775   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0422 17:19:27.995655   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0422 17:19:28.008873   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0422 17:19:28.013917   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0422 17:19:28.027340   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0422 17:19:28.032136   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0422 17:19:28.048584   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0422 17:19:28.054035   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0422 17:19:28.067212   30338 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0422 17:19:28.072616   30338 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0422 17:19:28.085764   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:19:28.114423   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:19:28.140780   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:19:28.167423   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:19:28.193709   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0422 17:19:28.220501   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 17:19:28.247527   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:19:28.273706   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:19:28.300216   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:19:28.327833   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:19:28.354462   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:19:28.379883   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0422 17:19:28.397684   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0422 17:19:28.415952   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0422 17:19:28.433985   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0422 17:19:28.452588   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0422 17:19:28.470942   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0422 17:19:28.489473   30338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0422 17:19:28.506839   30338 ssh_runner.go:195] Run: openssl version
	I0422 17:19:28.512969   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:19:28.524445   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:28.529289   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:28.529355   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:19:28.535616   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:19:28.548283   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:19:28.560303   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:19:28.565142   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:19:28.565203   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:19:28.571467   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:19:28.584022   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:19:28.596018   30338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:19:28.600700   30338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:19:28.600757   30338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:19:28.607201   30338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:19:28.619523   30338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:19:28.623761   30338 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 17:19:28.623816   30338 kubeadm.go:928] updating node {m03 192.168.39.220 8443 v1.30.0 crio true true} ...
	I0422 17:19:28.623953   30338 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:19:28.623980   30338 kube-vip.go:111] generating kube-vip config ...
	I0422 17:19:28.624011   30338 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:19:28.642523   30338 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:19:28.642584   30338 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0422 17:19:28.642637   30338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:19:28.653833   30338 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0422 17:19:28.653900   30338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0422 17:19:28.664803   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0422 17:19:28.664821   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0422 17:19:28.664840   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:19:28.664851   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:19:28.664914   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0422 17:19:28.664915   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0422 17:19:28.664803   30338 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0422 17:19:28.665030   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:19:28.681947   30338 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:19:28.681991   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0422 17:19:28.682024   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0422 17:19:28.682048   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0422 17:19:28.682069   30338 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0422 17:19:28.682085   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0422 17:19:28.706934   30338 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0422 17:19:28.706984   30338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/linux/amd64/v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0422 17:19:29.663184   30338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0422 17:19:29.672878   30338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0422 17:19:29.690607   30338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:19:29.709068   30338 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 17:19:29.727623   30338 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:19:29.732629   30338 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 17:19:29.746738   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:19:29.872016   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:19:29.893549   30338 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:19:29.894001   30338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:19:29.894057   30338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:19:29.910553   30338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0422 17:19:29.911074   30338 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:19:29.911602   30338 main.go:141] libmachine: Using API Version  1
	I0422 17:19:29.911625   30338 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:19:29.911973   30338 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:19:29.912143   30338 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:19:29.912287   30338 start.go:316] joinCluster: &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:19:29.912394   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0422 17:19:29.912409   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:19:29.915475   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:29.915931   30338 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:19:29.915953   30338 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:19:29.916128   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:19:29.916319   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:19:29.916483   30338 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:19:29.916652   30338 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:19:30.092391   30338 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:19:30.092442   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ziympr.v422sc69tns5sjqw --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443"
	I0422 17:19:55.530146   30338 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ziympr.v422sc69tns5sjqw --discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-025067-m03 --control-plane --apiserver-advertise-address=192.168.39.220 --apiserver-bind-port=8443": (25.437672291s)
	I0422 17:19:55.530187   30338 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0422 17:19:56.137720   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-025067-m03 minikube.k8s.io/updated_at=2024_04_22T17_19_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=ha-025067 minikube.k8s.io/primary=false
	I0422 17:19:56.274126   30338 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-025067-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0422 17:19:56.386379   30338 start.go:318] duration metric: took 26.474086213s to joinCluster
	I0422 17:19:56.386462   30338 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 17:19:56.388355   30338 out.go:177] * Verifying Kubernetes components...
	I0422 17:19:56.386850   30338 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:19:56.389912   30338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:19:56.626564   30338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:19:56.706969   30338 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:19:56.707279   30338 kapi.go:59] client config for ha-025067: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0422 17:19:56.707347   30338 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.22:8443
	I0422 17:19:56.707510   30338 node_ready.go:35] waiting up to 6m0s for node "ha-025067-m03" to be "Ready" ...
	I0422 17:19:56.707573   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:56.707580   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:56.707588   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:56.707595   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:56.711622   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:57.208591   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:57.208614   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:57.208622   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:57.208626   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:57.213222   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:57.707933   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:57.707955   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:57.707963   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:57.707967   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:57.711533   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:19:58.208333   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:58.208356   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:58.208364   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:58.208369   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:58.212414   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:58.708556   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:58.708585   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:58.708593   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:58.708599   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:58.712758   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:58.713433   30338 node_ready.go:53] node "ha-025067-m03" has status "Ready":"False"
	I0422 17:19:59.208425   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:59.208448   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:59.208456   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:59.208460   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:59.212570   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:19:59.708399   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:19:59.708419   30338 round_trippers.go:469] Request Headers:
	I0422 17:19:59.708426   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:19:59.708430   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:19:59.712589   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:00.208371   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:00.208394   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:00.208401   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:00.208406   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:00.212570   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:00.708399   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:00.708423   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:00.708433   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:00.708453   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:00.714064   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:00.715541   30338 node_ready.go:53] node "ha-025067-m03" has status "Ready":"False"
	I0422 17:20:01.208459   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:01.208482   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:01.208490   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:01.208493   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:01.212806   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:01.707796   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:01.707823   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:01.707835   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:01.707841   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:01.712135   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:02.208390   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:02.208412   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:02.208420   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:02.208424   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:02.212116   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:02.708431   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:02.708456   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:02.708465   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:02.708470   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:02.712114   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:03.208156   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:03.208179   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:03.208186   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:03.208190   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:03.211922   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:03.212519   30338 node_ready.go:53] node "ha-025067-m03" has status "Ready":"False"
	I0422 17:20:03.707878   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:03.707901   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:03.707908   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:03.707912   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:03.711494   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:04.208067   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:04.208092   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.208099   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.208103   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.211686   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:04.212498   30338 node_ready.go:49] node "ha-025067-m03" has status "Ready":"True"
	I0422 17:20:04.212517   30338 node_ready.go:38] duration metric: took 7.504994536s for node "ha-025067-m03" to be "Ready" ...
	I0422 17:20:04.212525   30338 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 17:20:04.212580   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:04.212589   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.212597   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.212600   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.219657   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:04.227271   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.227361   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-nswqp
	I0422 17:20:04.227372   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.227379   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.227384   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.230634   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:04.231363   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:04.231378   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.231388   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.231395   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.234301   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.234925   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.234949   30338 pod_ready.go:81] duration metric: took 7.651097ms for pod "coredns-7db6d8ff4d-nswqp" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.234963   30338 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.235028   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vrl4h
	I0422 17:20:04.235040   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.235050   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.235055   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.237846   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.238531   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:04.238550   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.238560   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.238565   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.241514   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.242223   30338 pod_ready.go:92] pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.242244   30338 pod_ready.go:81] duration metric: took 7.272849ms for pod "coredns-7db6d8ff4d-vrl4h" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.242257   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.242322   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067
	I0422 17:20:04.242337   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.242347   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.242355   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.244701   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.245379   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:04.245397   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.245406   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.245411   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.247922   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.248378   30338 pod_ready.go:92] pod "etcd-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.248399   30338 pod_ready.go:81] duration metric: took 6.128387ms for pod "etcd-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.248411   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.248466   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m02
	I0422 17:20:04.248477   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.248486   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.248496   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.251437   30338 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0422 17:20:04.252256   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:04.252271   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.252278   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.252284   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.260618   30338 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0422 17:20:04.261155   30338 pod_ready.go:92] pod "etcd-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:04.261173   30338 pod_ready.go:81] duration metric: took 12.753655ms for pod "etcd-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.261186   30338 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:04.408581   30338 request.go:629] Waited for 147.316449ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:04.408644   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:04.408653   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.408663   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.408671   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.412815   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:04.608462   30338 request.go:629] Waited for 195.048242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:04.608529   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:04.608537   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.608546   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.608555   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.613464   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:04.808436   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:04.808461   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:04.808469   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:04.808473   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:04.812589   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:05.009058   30338 request.go:629] Waited for 195.465329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.009129   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.009136   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.009147   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.009152   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.013015   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:05.262095   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:05.262121   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.262130   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.262136   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.267529   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:05.408725   30338 request.go:629] Waited for 140.334553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.408806   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.408812   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.408819   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.408828   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.412651   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:05.761631   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/etcd-ha-025067-m03
	I0422 17:20:05.761652   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.761659   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.761663   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.765941   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:05.809069   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:05.809095   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:05.809102   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:05.809106   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:05.812854   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:05.813500   30338 pod_ready.go:92] pod "etcd-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:05.813523   30338 pod_ready.go:81] duration metric: took 1.552329799s for pod "etcd-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:05.813547   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.008971   30338 request.go:629] Waited for 195.359368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067
	I0422 17:20:06.009049   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067
	I0422 17:20:06.009056   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.009064   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.009071   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.014402   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:06.208895   30338 request.go:629] Waited for 193.410481ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:06.208964   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:06.208969   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.208976   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.208981   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.212765   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:06.213419   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:06.213436   30338 pod_ready.go:81] duration metric: took 399.882287ms for pod "kube-apiserver-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.213447   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.408596   30338 request.go:629] Waited for 195.065355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m02
	I0422 17:20:06.408660   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m02
	I0422 17:20:06.408666   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.408676   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.408687   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.414791   30338 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0422 17:20:06.608283   30338 request.go:629] Waited for 192.222584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:06.608340   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:06.608346   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.608353   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.608362   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.611724   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:06.612358   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:06.612374   30338 pod_ready.go:81] duration metric: took 398.921569ms for pod "kube-apiserver-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.612383   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:06.808569   30338 request.go:629] Waited for 196.119415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:06.808635   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:06.808640   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:06.808647   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:06.808652   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:06.812804   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:07.008863   30338 request.go:629] Waited for 195.374285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.008937   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.008945   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.008963   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.008990   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.013499   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:07.208457   30338 request.go:629] Waited for 95.340592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:07.208521   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:07.208526   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.208532   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.208537   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.212919   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:07.408233   30338 request.go:629] Waited for 194.383295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.408313   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.408321   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.408336   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.408346   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.411555   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:07.613411   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:07.613438   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.613449   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.613456   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.621109   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:07.808164   30338 request.go:629] Waited for 185.26956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.808255   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:07.808266   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:07.808277   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:07.808286   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:07.811932   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.113012   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-025067-m03
	I0422 17:20:08.113034   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.113043   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.113047   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.116472   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.208791   30338 request.go:629] Waited for 91.272542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:08.208890   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:08.208898   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.208906   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.208913   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.212727   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.213327   30338 pod_ready.go:92] pod "kube-apiserver-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:08.213347   30338 pod_ready.go:81] duration metric: took 1.600957094s for pod "kube-apiserver-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.213383   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.408870   30338 request.go:629] Waited for 195.4052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067
	I0422 17:20:08.408968   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067
	I0422 17:20:08.408975   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.408982   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.408986   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.412980   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.609144   30338 request.go:629] Waited for 195.365293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:08.609205   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:08.609212   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.609226   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.609238   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.613235   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:08.614307   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:08.614325   30338 pod_ready.go:81] duration metric: took 400.930846ms for pod "kube-controller-manager-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.614333   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:08.808511   30338 request.go:629] Waited for 194.114176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:20:08.808610   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m02
	I0422 17:20:08.808622   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:08.808630   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:08.808634   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:08.811957   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:09.009099   30338 request.go:629] Waited for 196.371859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.009187   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.009199   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.009209   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.009220   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.013088   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:09.013918   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:09.013935   30338 pod_ready.go:81] duration metric: took 399.595545ms for pod "kube-controller-manager-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.013944   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.208441   30338 request.go:629] Waited for 194.414374ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m03
	I0422 17:20:09.208496   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-025067-m03
	I0422 17:20:09.208501   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.208509   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.208513   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.214076   30338 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0422 17:20:09.408248   30338 request.go:629] Waited for 193.289304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:09.408321   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:09.408326   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.408332   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.408335   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.413024   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:09.413485   30338 pod_ready.go:92] pod "kube-controller-manager-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:09.413503   30338 pod_ready.go:81] duration metric: took 399.553039ms for pod "kube-controller-manager-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.413516   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.608590   30338 request.go:629] Waited for 195.014295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:20:09.608670   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dk5ww
	I0422 17:20:09.608682   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.608695   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.608704   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.612912   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:09.808095   30338 request.go:629] Waited for 194.32254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.808159   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:09.808166   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:09.808173   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:09.808177   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:09.811542   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:09.812043   30338 pod_ready.go:92] pod "kube-proxy-dk5ww" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:09.812061   30338 pod_ready.go:81] duration metric: took 398.537697ms for pod "kube-proxy-dk5ww" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:09.812074   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.008615   30338 request.go:629] Waited for 196.476057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:20:10.008715   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pf7cc
	I0422 17:20:10.008726   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.008737   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.008744   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.013332   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:10.208359   30338 request.go:629] Waited for 193.179588ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:10.208431   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:10.208442   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.208453   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.208462   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.216249   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:10.217026   30338 pod_ready.go:92] pod "kube-proxy-pf7cc" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:10.217047   30338 pod_ready.go:81] duration metric: took 404.966564ms for pod "kube-proxy-pf7cc" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.217055   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wsr9x" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.409006   30338 request.go:629] Waited for 191.869571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wsr9x
	I0422 17:20:10.409066   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wsr9x
	I0422 17:20:10.409071   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.409078   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.409085   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.412838   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:10.608857   30338 request.go:629] Waited for 195.390297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:10.608931   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:10.608943   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.608953   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.608960   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.612941   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:10.614350   30338 pod_ready.go:92] pod "kube-proxy-wsr9x" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:10.614367   30338 pod_ready.go:81] duration metric: took 397.302932ms for pod "kube-proxy-wsr9x" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.614376   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:10.808575   30338 request.go:629] Waited for 194.119598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:20:10.808658   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067
	I0422 17:20:10.808684   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:10.808695   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:10.808703   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:10.812493   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:11.008317   30338 request.go:629] Waited for 195.180211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:11.008418   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067
	I0422 17:20:11.008431   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.008442   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.008450   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.012464   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:11.014055   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:11.014072   30338 pod_ready.go:81] duration metric: took 399.690169ms for pod "kube-scheduler-ha-025067" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.014095   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.208140   30338 request.go:629] Waited for 193.972024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:20:11.208203   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m02
	I0422 17:20:11.208210   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.208220   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.208227   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.212964   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:11.408292   30338 request.go:629] Waited for 194.265102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:11.408362   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m02
	I0422 17:20:11.408367   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.408374   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.408379   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.412023   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:11.413083   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:11.413098   30338 pod_ready.go:81] duration metric: took 398.996648ms for pod "kube-scheduler-ha-025067-m02" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.413112   30338 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.608335   30338 request.go:629] Waited for 195.114356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m03
	I0422 17:20:11.608406   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-025067-m03
	I0422 17:20:11.608413   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.608424   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.608431   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.613255   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:11.808591   30338 request.go:629] Waited for 194.379878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:11.808643   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes/ha-025067-m03
	I0422 17:20:11.808648   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.808656   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.808659   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.813031   30338 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0422 17:20:11.813961   30338 pod_ready.go:92] pod "kube-scheduler-ha-025067-m03" in "kube-system" namespace has status "Ready":"True"
	I0422 17:20:11.813980   30338 pod_ready.go:81] duration metric: took 400.860086ms for pod "kube-scheduler-ha-025067-m03" in "kube-system" namespace to be "Ready" ...
	I0422 17:20:11.813994   30338 pod_ready.go:38] duration metric: took 7.601459476s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 17:20:11.814015   30338 api_server.go:52] waiting for apiserver process to appear ...
	I0422 17:20:11.814067   30338 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:20:11.830960   30338 api_server.go:72] duration metric: took 15.444458246s to wait for apiserver process to appear ...
	I0422 17:20:11.830989   30338 api_server.go:88] waiting for apiserver healthz status ...
	I0422 17:20:11.831012   30338 api_server.go:253] Checking apiserver healthz at https://192.168.39.22:8443/healthz ...
	I0422 17:20:11.835763   30338 api_server.go:279] https://192.168.39.22:8443/healthz returned 200:
	ok
	I0422 17:20:11.835834   30338 round_trippers.go:463] GET https://192.168.39.22:8443/version
	I0422 17:20:11.835842   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:11.835854   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:11.835861   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:11.836962   30338 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0422 17:20:11.837099   30338 api_server.go:141] control plane version: v1.30.0
	I0422 17:20:11.837122   30338 api_server.go:131] duration metric: took 6.125261ms to wait for apiserver health ...
	I0422 17:20:11.837132   30338 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 17:20:12.008533   30338 request.go:629] Waited for 171.326368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.008588   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.008593   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.008600   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.008605   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.016043   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:12.023997   30338 system_pods.go:59] 24 kube-system pods found
	I0422 17:20:12.024025   30338 system_pods.go:61] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:20:12.024030   30338 system_pods.go:61] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:20:12.024033   30338 system_pods.go:61] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:20:12.024043   30338 system_pods.go:61] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:20:12.024046   30338 system_pods.go:61] "etcd-ha-025067-m03" [991fbed5-cbd2-47f4-b6ed-6d5d8b90fc6f] Running
	I0422 17:20:12.024050   30338 system_pods.go:61] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:20:12.024057   30338 system_pods.go:61] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:20:12.024066   30338 system_pods.go:61] "kindnet-ztcgm" [8d90cd98-58d5-40bf-90fa-5098dd0ebed9] Running
	I0422 17:20:12.024084   30338 system_pods.go:61] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:20:12.024089   30338 system_pods.go:61] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:20:12.024095   30338 system_pods.go:61] "kube-apiserver-ha-025067-m03" [bb05295e-a36d-496c-ba52-427800a5e567] Running
	I0422 17:20:12.024104   30338 system_pods.go:61] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:20:12.024108   30338 system_pods.go:61] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:20:12.024115   30338 system_pods.go:61] "kube-controller-manager-ha-025067-m03" [122ddb06-24df-4fd0-b1fb-e9168ff5d3ba] Running
	I0422 17:20:12.024118   30338 system_pods.go:61] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:20:12.024121   30338 system_pods.go:61] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:20:12.024124   30338 system_pods.go:61] "kube-proxy-wsr9x" [fafeef7d-736f-4aa2-88a9-1a8ee00af204] Running
	I0422 17:20:12.024128   30338 system_pods.go:61] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:20:12.024133   30338 system_pods.go:61] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:20:12.024139   30338 system_pods.go:61] "kube-scheduler-ha-025067-m03" [1c9bea0c-edac-4cd7-85d9-cc9b23ced6f3] Running
	I0422 17:20:12.024142   30338 system_pods.go:61] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:20:12.024145   30338 system_pods.go:61] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:20:12.024148   30338 system_pods.go:61] "kube-vip-ha-025067-m03" [bf7d3c98-811f-450f-8764-76d0b87175bd] Running
	I0422 17:20:12.024154   30338 system_pods.go:61] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:20:12.024161   30338 system_pods.go:74] duration metric: took 187.022358ms to wait for pod list to return data ...
	I0422 17:20:12.024174   30338 default_sa.go:34] waiting for default service account to be created ...
	I0422 17:20:12.208594   30338 request.go:629] Waited for 184.345038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:20:12.208668   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/default/serviceaccounts
	I0422 17:20:12.208673   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.208689   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.208699   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.211945   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:12.212074   30338 default_sa.go:45] found service account: "default"
	I0422 17:20:12.212090   30338 default_sa.go:55] duration metric: took 187.905867ms for default service account to be created ...
	I0422 17:20:12.212099   30338 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 17:20:12.408838   30338 request.go:629] Waited for 196.639234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.408919   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/namespaces/kube-system/pods
	I0422 17:20:12.408929   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.408939   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.408953   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.416098   30338 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0422 17:20:12.424212   30338 system_pods.go:86] 24 kube-system pods found
	I0422 17:20:12.424244   30338 system_pods.go:89] "coredns-7db6d8ff4d-nswqp" [bedfb6c0-6553-4ec2-9318-d1997a2994e7] Running
	I0422 17:20:12.424251   30338 system_pods.go:89] "coredns-7db6d8ff4d-vrl4h" [9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8] Running
	I0422 17:20:12.424258   30338 system_pods.go:89] "etcd-ha-025067" [e5f2c5e2-d3e0-4d90-b7f8-d223ff6d1884] Running
	I0422 17:20:12.424264   30338 system_pods.go:89] "etcd-ha-025067-m02" [93ed2373-8f12-411c-a5ac-25fd73622827] Running
	I0422 17:20:12.424270   30338 system_pods.go:89] "etcd-ha-025067-m03" [991fbed5-cbd2-47f4-b6ed-6d5d8b90fc6f] Running
	I0422 17:20:12.424276   30338 system_pods.go:89] "kindnet-ctdzp" [36712dec-8183-45d7-88e1-a8808ea89975] Running
	I0422 17:20:12.424282   30338 system_pods.go:89] "kindnet-tmxd9" [0d448df8-32a2-46e8-bcbf-fac5d147e45f] Running
	I0422 17:20:12.424288   30338 system_pods.go:89] "kindnet-ztcgm" [8d90cd98-58d5-40bf-90fa-5098dd0ebed9] Running
	I0422 17:20:12.424294   30338 system_pods.go:89] "kube-apiserver-ha-025067" [c9012c4d-b4d1-47ea-acdb-687127fadec1] Running
	I0422 17:20:12.424300   30338 system_pods.go:89] "kube-apiserver-ha-025067-m02" [ab377464-cc66-47e6-80ef-f99f830a8c20] Running
	I0422 17:20:12.424308   30338 system_pods.go:89] "kube-apiserver-ha-025067-m03" [bb05295e-a36d-496c-ba52-427800a5e567] Running
	I0422 17:20:12.424315   30338 system_pods.go:89] "kube-controller-manager-ha-025067" [b16823d1-8223-4a25-8a50-f7593984508a] Running
	I0422 17:20:12.424325   30338 system_pods.go:89] "kube-controller-manager-ha-025067-m02" [e11d6d4a-ed87-459e-9665-edee307a967b] Running
	I0422 17:20:12.424333   30338 system_pods.go:89] "kube-controller-manager-ha-025067-m03" [122ddb06-24df-4fd0-b1fb-e9168ff5d3ba] Running
	I0422 17:20:12.424341   30338 system_pods.go:89] "kube-proxy-dk5ww" [227acc0a-e74c-4119-8968-8082dba031cf] Running
	I0422 17:20:12.424354   30338 system_pods.go:89] "kube-proxy-pf7cc" [4de4d571-9b5a-43ae-9808-4dbf5d1a5e26] Running
	I0422 17:20:12.424360   30338 system_pods.go:89] "kube-proxy-wsr9x" [fafeef7d-736f-4aa2-88a9-1a8ee00af204] Running
	I0422 17:20:12.424367   30338 system_pods.go:89] "kube-scheduler-ha-025067" [1ddbd09c-9549-418e-aa7d-8ac93111cc78] Running
	I0422 17:20:12.424374   30338 system_pods.go:89] "kube-scheduler-ha-025067-m02" [1f50ea2e-ea95-4512-8731-891549fe25ee] Running
	I0422 17:20:12.424384   30338 system_pods.go:89] "kube-scheduler-ha-025067-m03" [1c9bea0c-edac-4cd7-85d9-cc9b23ced6f3] Running
	I0422 17:20:12.424391   30338 system_pods.go:89] "kube-vip-ha-025067" [8c381060-83d4-411b-98ac-c6b1842cd3d8] Running
	I0422 17:20:12.424402   30338 system_pods.go:89] "kube-vip-ha-025067-m02" [0edd52d9-9b97-4681-939e-120b0c6bdd7e] Running
	I0422 17:20:12.424408   30338 system_pods.go:89] "kube-vip-ha-025067-m03" [bf7d3c98-811f-450f-8764-76d0b87175bd] Running
	I0422 17:20:12.424414   30338 system_pods.go:89] "storage-provisioner" [68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b] Running
	I0422 17:20:12.424426   30338 system_pods.go:126] duration metric: took 212.316904ms to wait for k8s-apps to be running ...
	I0422 17:20:12.424438   30338 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 17:20:12.424487   30338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:20:12.441136   30338 system_svc.go:56] duration metric: took 16.689409ms WaitForService to wait for kubelet
	I0422 17:20:12.441183   30338 kubeadm.go:576] duration metric: took 16.054683836s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:20:12.441205   30338 node_conditions.go:102] verifying NodePressure condition ...
	I0422 17:20:12.608837   30338 request.go:629] Waited for 167.557346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.22:8443/api/v1/nodes
	I0422 17:20:12.608887   30338 round_trippers.go:463] GET https://192.168.39.22:8443/api/v1/nodes
	I0422 17:20:12.608892   30338 round_trippers.go:469] Request Headers:
	I0422 17:20:12.608900   30338 round_trippers.go:473]     Accept: application/json, */*
	I0422 17:20:12.608903   30338 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0422 17:20:12.612754   30338 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0422 17:20:12.613857   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:20:12.613878   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:20:12.613889   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:20:12.613892   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:20:12.613896   30338 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 17:20:12.613899   30338 node_conditions.go:123] node cpu capacity is 2
	I0422 17:20:12.613902   30338 node_conditions.go:105] duration metric: took 172.692667ms to run NodePressure ...
	I0422 17:20:12.613913   30338 start.go:240] waiting for startup goroutines ...
	I0422 17:20:12.613930   30338 start.go:254] writing updated cluster config ...
	I0422 17:20:12.614248   30338 ssh_runner.go:195] Run: rm -f paused
	I0422 17:20:12.664197   30338 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 17:20:12.666233   30338 out.go:177] * Done! kubectl is now configured to use "ha-025067" cluster and "default" namespace by default
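	(Editorial note, not part of the captured log: the node_ready/pod_ready entries above show minikube polling the API server roughly every 500ms until the "Ready" condition of node "ha-025067-m03" and of each system-critical pod reports True. The following is a minimal client-go sketch of that polling pattern, written for illustration only; it is not minikube's code, and the kubeconfig handling and node name are assumptions taken from the log.)

	// readiness_poll.go: illustrative sketch of the node-readiness polling
	// pattern seen in the log above (GET the Node, inspect its Ready condition,
	// sleep, repeat). Assumes a kubeconfig at the default location.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll the node object until its Ready condition is True, roughly
		// matching the ~500ms cadence visible in the timestamps above.
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-025067-m03", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}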
	
	
	==> CRI-O <==
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.591495241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806675591472093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=013afdb0-2bf3-4310-a876-c68613dd6411 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.592332171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68c9dff2-459c-4001-ab28-7cd98c65284e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.592406671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68c9dff2-459c-4001-ab28-7cd98c65284e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.593824156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68c9dff2-459c-4001-ab28-7cd98c65284e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.640424028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8ba9eb4-1d99-47b1-a8f7-24ced166cb50 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.640531123Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8ba9eb4-1d99-47b1-a8f7-24ced166cb50 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.641952450Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c8f3f84-a9a5-44a9-9625-628faedd7cad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.642562846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806675642535650,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c8f3f84-a9a5-44a9-9625-628faedd7cad name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.643178145Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=864f3175-be41-4cea-9def-f56cd1f58ec9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.643257362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=864f3175-be41-4cea-9def-f56cd1f58ec9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.643499956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=864f3175-be41-4cea-9def-f56cd1f58ec9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.685939329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ec39a0e-fcb7-4c7d-a54e-fb65d3cda864 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.686021055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ec39a0e-fcb7-4c7d-a54e-fb65d3cda864 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.687785365Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4d80a4c-ea6d-4d94-8eb1-4ed15bfb5db4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.688336159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806675688310967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4d80a4c-ea6d-4d94-8eb1-4ed15bfb5db4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.688985823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0714850a-b384-4a70-b279-f701ed490802 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.689088330Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0714850a-b384-4a70-b279-f701ed490802 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.689328874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0714850a-b384-4a70-b279-f701ed490802 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.732228878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35372b63-3be0-4b8c-8421-f49d4c566d71 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.732302742Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35372b63-3be0-4b8c-8421-f49d4c566d71 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.733632443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1c0b235-48d4-408e-a6fa-f8adf833ce0d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.734095578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713806675734020966,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1c0b235-48d4-408e-a6fa-f8adf833ce0d name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.734887894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=131f2ab0-8ace-4ca4-af6a-aedaef8ea01a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.734959874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=131f2ab0-8ace-4ca4-af6a-aedaef8ea01a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:24:35 ha-025067 crio[684]: time="2024-04-22 17:24:35.735320072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806416877732555,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d608f1d9901992c53482f71b3f587c7a95cb733f1b526137e409395d19823570,PodSandboxId:9f37c522b34de51b23edf3ce153b1b945aa881ef35904a6adc64e6bc79fbc903,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806270846734433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 0,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270540813806,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806270545296422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9df
d-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9,PodSandboxId:20bb53838ad91642d644775254437043c68999c47196e36d8e54563c5a227cdf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:171380626
8479796632,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806268358711287,Labels:map[string]string
{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3,PodSandboxId:d34272323a0f8e820f3d53a3996a578eee13617c61af54d05bfd8ccdadfdc84e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806250846697834,Labels:map[string]string{i
o.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a734ee4ab85ed101d0ef67cd65d88766,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806248146739145,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubern
etes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158,PodSandboxId:a499e1bb77c00130e5d4847b8a49fd98b0a9e8a05babfc87c6946db4f98460db,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806248053717972,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name
: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806248056983647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sched
uler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b,PodSandboxId:8c39dcc79583cc2097a5e9586a036f6777c0ec386639cd4d171018ac6eadb4bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806248031548375,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kub
ernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=131f2ab0-8ace-4ca4-af6a-aedaef8ea01a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	983cb8537237f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   3c3abb6c214d4       busybox-fc5497c4f-l97ld
	d608f1d990199       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   9f37c522b34de       storage-provisioner
	c0af820e7bd06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   c2921baac16b3       coredns-7db6d8ff4d-vrl4h
	524e02d80347d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   b553b11bb990b       coredns-7db6d8ff4d-nswqp
	e792653200952       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   20bb53838ad91       kindnet-tmxd9
	f841dcb8dd09b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      6 minutes ago       Running             kube-proxy                0                   052596614cf9c       kube-proxy-pf7cc
	ce4c01cd6ca70       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   d34272323a0f8       kube-vip-ha-025067
	b3d751e3e8f50       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   3c34eb37cd442       etcd-ha-025067
	549930f1d83f6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      7 minutes ago       Running             kube-scheduler            0                   c0ff0dbc27bbd       kube-scheduler-ha-025067
	819e895185838       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      7 minutes ago       Running             kube-controller-manager   0                   a499e1bb77c00       kube-controller-manager-ha-025067
	9bc987b1519c5       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      7 minutes ago       Running             kube-apiserver            0                   8c39dcc79583c       kube-apiserver-ha-025067
	
	
	==> coredns [524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0] <==
	[INFO] 10.244.0.4:52803 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000122922s
	[INFO] 10.244.0.4:45587 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164214s
	[INFO] 10.244.0.4:36350 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134111s
	[INFO] 10.244.1.2:56300 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001818553s
	[INFO] 10.244.1.2:58403 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100106s
	[INFO] 10.244.1.2:49747 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094083s
	[INFO] 10.244.1.2:39851 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094869s
	[INFO] 10.244.1.2:51921 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132016s
	[INFO] 10.244.2.2:46485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151891s
	[INFO] 10.244.2.2:52343 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183731s
	[INFO] 10.244.2.2:36982 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162215s
	[INFO] 10.244.2.2:56193 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471319s
	[INFO] 10.244.2.2:48503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072359s
	[INFO] 10.244.2.2:35429 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006794s
	[INFO] 10.244.2.2:56484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092002s
	[INFO] 10.244.0.4:39516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189987s
	[INFO] 10.244.0.4:60228 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082728s
	[INFO] 10.244.1.2:44703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203159s
	[INFO] 10.244.1.2:33524 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167155s
	[INFO] 10.244.1.2:43201 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098618s
	[INFO] 10.244.2.2:53563 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215578s
	[INFO] 10.244.2.2:54616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163304s
	[INFO] 10.244.0.4:49280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092142s
	[INFO] 10.244.1.2:40544 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116574s
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249064s
	
	
	==> coredns [c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55] <==
	[INFO] 10.244.1.2:60175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001836146s
	[INFO] 10.244.2.2:52744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012764s
	[INFO] 10.244.2.2:37678 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001635715s
	[INFO] 10.244.0.4:33703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000230709s
	[INFO] 10.244.0.4:60463 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000233694s
	[INFO] 10.244.0.4:44231 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.015736347s
	[INFO] 10.244.0.4:37322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115326s
	[INFO] 10.244.1.2:58538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135694s
	[INFO] 10.244.1.2:51828 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153493s
	[INFO] 10.244.1.2:44556 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001447535s
	[INFO] 10.244.2.2:44901 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139485s
	[INFO] 10.244.0.4:42667 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108865s
	[INFO] 10.244.0.4:54399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073213s
	[INFO] 10.244.1.2:35127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090826s
	[INFO] 10.244.2.2:52722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185046s
	[INFO] 10.244.2.2:49596 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128238s
	[INFO] 10.244.0.4:59309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125541s
	[INFO] 10.244.0.4:42344 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215786s
	[INFO] 10.244.0.4:34084 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000295612s
	[INFO] 10.244.1.2:50561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016924s
	[INFO] 10.244.1.2:40185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080135s
	[INFO] 10.244.1.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083107s
	[INFO] 10.244.2.2:52310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147992s
	[INFO] 10.244.2.2:48499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103149s
	[INFO] 10.244.2.2:60500 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018474s
	
	
	==> describe nodes <==
	Name:               ha-025067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:17:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:24:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:20:37 +0000   Mon, 22 Apr 2024 17:17:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-025067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 73a664449fd9403194a5919e23b0871b
	  System UUID:                73a66444-9fd9-4031-94a5-919e23b0871b
	  Boot ID:                    4c2ace2e-318b-4b8f-bd1e-a5f6d5151f88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l97ld              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 coredns-7db6d8ff4d-nswqp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m49s
	  kube-system                 coredns-7db6d8ff4d-vrl4h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m49s
	  kube-system                 etcd-ha-025067                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m2s
	  kube-system                 kindnet-tmxd9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m49s
	  kube-system                 kube-apiserver-ha-025067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-controller-manager-ha-025067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-proxy-pf7cc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-scheduler-ha-025067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 kube-vip-ha-025067                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m47s                kube-proxy       
	  Normal  Starting                 7m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m2s (x2 over 7m2s)  kubelet          Node ha-025067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m2s (x2 over 7m2s)  kubelet          Node ha-025067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m2s (x2 over 7m2s)  kubelet          Node ha-025067 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m49s                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal  NodeReady                6m47s                kubelet          Node ha-025067 status is now: NodeReady
	  Normal  RegisteredNode           5m40s                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal  RegisteredNode           4m25s                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	
	
	Name:               ha-025067-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_18_41_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:18:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:21:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 17:20:41 +0000   Mon, 22 Apr 2024 17:22:06 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-025067-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1f034a156f4a3fb9cb79780785386e
	  System UUID:                8a1f034a-156f-4a3f-b9cb-79780785386e
	  Boot ID:                    f3fb9e45-42b6-4f46-ad83-f76ee2a3cbe3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m6qxt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-ha-025067-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m56s
	  kube-system                 kindnet-ctdzp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-apiserver-ha-025067-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-controller-manager-ha-025067-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-dk5ww                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-scheduler-ha-025067-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-vip-ha-025067-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet          Node ha-025067-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m54s                  node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  NodeNotReady             2m30s                  node-controller  Node ha-025067-m02 status is now: NodeNotReady
	
	
	Name:               ha-025067-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_19_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:19:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:24:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:19:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:19:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:19:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:20:23 +0000   Mon, 22 Apr 2024 17:20:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-025067-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 300afc7a045c4fd490327eb7452e4f8c
	  System UUID:                300afc7a-045c-4fd4-9032-7eb7452e4f8c
	  Boot ID:                    d51c7e9b-22eb-41ed-8a76-3c0480ae4c87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tvcmk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-ha-025067-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m41s
	  kube-system                 kindnet-ztcgm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m43s
	  kube-system                 kube-apiserver-ha-025067-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-ha-025067-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-wsr9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-scheduler-ha-025067-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-vip-ha-025067-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s (x8 over 4m44s)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m44s)  kubelet          Node ha-025067-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s (x7 over 4m44s)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal  RegisteredNode           4m39s                  node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal  RegisteredNode           4m25s                  node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	
	
	Name:               ha-025067-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_20_51_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:20:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:24:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:20:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:20:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:20:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:21:22 +0000   Mon, 22 Apr 2024 17:21:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-025067-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfe8f8092cda4851adcca8410e5437c9
	  System UUID:                bfe8f809-2cda-4851-adcc-a8410e5437c9
	  Boot ID:                    9233437f-4ac9-4a5c-8bc3-15be3e575746
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d6tpm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m45s
	  kube-system                 kube-proxy-kbhbk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m46s (x2 over 3m46s)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m46s (x2 over 3m46s)  kubelet          Node ha-025067-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m46s (x2 over 3m46s)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal  NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m44s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal  NodeReady                3m34s                  kubelet          Node ha-025067-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr22 17:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055893] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040230] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr22 17:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.880735] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.629211] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.717166] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.065948] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064117] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195469] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.121015] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285063] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.448511] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059431] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.181808] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.968933] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.287359] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.083571] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.934890] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 17:18] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f] <==
	{"level":"warn","ts":"2024-04-22T17:24:36.036626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.041986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.05345Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.055786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.090748Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.123431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.129293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.133368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.139275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.154389Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.164743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.174092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.177579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.181881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.1928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.200353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.208954Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.213648Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.214695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.218699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.225297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.230219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.234698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.245217Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-22T17:24:36.281289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"cde0bb267fc4e559","from":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:24:36 up 7 min,  0 users,  load average: 0.26, 0.15, 0.06
	Linux ha-025067 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [e792653200952db550fa78eaf2635e4828b869acf415379a10b1f01d7b9b24f9] <==
	I0422 17:23:59.994947       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:24:10.009414       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:24:10.009456       1 main.go:227] handling current node
	I0422 17:24:10.009468       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:24:10.009475       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:24:10.009583       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:24:10.009612       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:24:10.009662       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:24:10.009692       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:24:20.021797       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:24:20.021841       1 main.go:227] handling current node
	I0422 17:24:20.021857       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:24:20.021866       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:24:20.022014       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:24:20.022099       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:24:20.022201       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:24:20.022234       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:24:30.036125       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:24:30.036175       1 main.go:227] handling current node
	I0422 17:24:30.036193       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:24:30.036203       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:24:30.036353       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:24:30.036389       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:24:30.036467       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:24:30.036503       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b] <==
	E0422 17:17:34.274861       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"client disconnected"}: client disconnected
	E0422 17:17:34.274994       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0422 17:17:34.276119       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0422 17:17:34.276157       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0422 17:17:34.277898       1 timeout.go:142] post-timeout activity - time-elapsed: 2.971164ms, POST "/api/v1/namespaces/default/events" result: <nil>
	I0422 17:17:34.322005       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 17:17:34.343471       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0422 17:17:34.493863       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 17:17:47.670486       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0422 17:17:47.766389       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0422 17:20:17.847358       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57020: use of closed network connection
	E0422 17:20:18.052687       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57034: use of closed network connection
	E0422 17:20:18.268726       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57052: use of closed network connection
	E0422 17:20:18.484851       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57078: use of closed network connection
	E0422 17:20:18.702901       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57092: use of closed network connection
	E0422 17:20:18.897172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57098: use of closed network connection
	E0422 17:20:19.097803       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57116: use of closed network connection
	E0422 17:20:19.305950       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57130: use of closed network connection
	E0422 17:20:19.498927       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57146: use of closed network connection
	E0422 17:20:19.836096       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57174: use of closed network connection
	E0422 17:20:20.040429       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57194: use of closed network connection
	E0422 17:20:20.265570       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57198: use of closed network connection
	E0422 17:20:20.450583       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57218: use of closed network connection
	E0422 17:20:20.848329       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:57248: use of closed network connection
	W0422 17:21:32.941804       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.220]
	
	
	==> kube-controller-manager [819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158] <==
	I0422 17:18:42.675807       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m02"
	I0422 17:19:52.993162       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-025067-m03\" does not exist"
	I0422 17:19:53.024303       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-025067-m03" podCIDRs=["10.244.2.0/24"]
	I0422 17:19:57.730824       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m03"
	I0422 17:20:13.672920       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.629553ms"
	I0422 17:20:13.845423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="171.979607ms"
	I0422 17:20:14.133932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="288.190671ms"
	E0422 17:20:14.134006       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0422 17:20:14.199360       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.214623ms"
	I0422 17:20:14.199589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.252µs"
	I0422 17:20:14.479901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.946µs"
	I0422 17:20:17.125741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.752001ms"
	I0422 17:20:17.125869       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.335µs"
	I0422 17:20:17.199312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.549855ms"
	I0422 17:20:17.199449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="71.695µs"
	I0422 17:20:17.275434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.800596ms"
	I0422 17:20:17.293193       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.363µs"
	E0422 17:20:50.894840       1 certificate_controller.go:146] Sync csr-phw8q failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-phw8q": the object has been modified; please apply your changes to the latest version and try again
	I0422 17:20:51.171537       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-025067-m04\" does not exist"
	I0422 17:20:51.196798       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-025067-m04" podCIDRs=["10.244.3.0/24"]
	I0422 17:20:52.758431       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m04"
	I0422 17:21:02.204529       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-025067-m04"
	I0422 17:22:06.310467       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-025067-m04"
	I0422 17:22:06.409315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.463987ms"
	I0422 17:22:06.409587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.041µs"
	
	
	==> kube-proxy [f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72] <==
	I0422 17:17:48.726691       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:17:48.757347       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	I0422 17:17:48.861680       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:17:48.861739       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:17:48.861755       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:17:48.864675       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:17:48.865106       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:17:48.865139       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:17:48.866289       1 config.go:192] "Starting service config controller"
	I0422 17:17:48.866321       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:17:48.866341       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:17:48.866345       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:17:48.868358       1 config.go:319] "Starting node config controller"
	I0422 17:17:48.868391       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:17:48.968146       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:17:48.968207       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:17:48.969297       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89] <==
	W0422 17:17:31.189846       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 17:17:31.189993       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:17:31.189754       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 17:17:31.190224       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 17:17:32.005472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:17:32.005606       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:17:32.048534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 17:17:32.048691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 17:17:32.103231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 17:17:32.103388       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 17:17:32.162682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 17:17:32.162810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 17:17:32.328588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:17:32.328736       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:17:32.398176       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:17:32.398208       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 17:17:32.531181       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:17:32.531303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:17:32.744247       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:17:32.744909       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 17:17:34.780938       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 17:20:51.292468       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fjzpp\": pod kindnet-fjzpp is already assigned to node \"ha-025067-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fjzpp" node="ha-025067-m04"
	E0422 17:20:51.292673       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 528898c4-830d-4367-9bc3-59f41121702e(kube-system/kindnet-fjzpp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fjzpp"
	E0422 17:20:51.292706       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fjzpp\": pod kindnet-fjzpp is already assigned to node \"ha-025067-m04\"" pod="kube-system/kindnet-fjzpp"
	I0422 17:20:51.292734       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fjzpp" node="ha-025067-m04"
	
	
	==> kubelet <==
	Apr 22 17:20:34 ha-025067 kubelet[1368]: E0422 17:20:34.512670    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:20:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:20:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:20:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:20:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:21:34 ha-025067 kubelet[1368]: E0422 17:21:34.511429    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:21:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:21:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:21:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:21:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:22:34 ha-025067 kubelet[1368]: E0422 17:22:34.512373    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:22:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:22:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:22:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:22:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:23:34 ha-025067 kubelet[1368]: E0422 17:23:34.512577    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:23:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:23:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:23:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:23:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:24:34 ha-025067 kubelet[1368]: E0422 17:24:34.510603    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:24:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:24:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:24:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:24:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-025067 -n ha-025067
helpers_test.go:261: (dbg) Run:  kubectl --context ha-025067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.19s)
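The kubelet lines at the tail of the log above are the periodic iptables canary probe: kubelet tries to create a KUBE-KUBELET-CANARY chain in the ip6tables "nat" table, and ip6tables reports that the table does not exist, which (per the "do you need to insmod?" hint) usually means the ip6table_nat kernel module is not loaded in the guest. These entries are noise rather than the cause of the RestartSecondaryNode failure. A minimal sketch of the failing probe, assuming only that an ip6tables binary is on PATH and that the process runs as root; this is an illustration of the call that fails, not kubelet's actual code:

package main

import (
	"fmt"
	"os/exec"
)

// probeIP6TablesNAT mimics the kubelet canary: try to create a throwaway
// chain in the ip6tables "nat" table and report whether the table is usable.
func probeIP6TablesNAT() error {
	// -w waits for the xtables lock; the chain name mirrors the log above.
	out, err := exec.Command("ip6tables", "-w", "-t", "nat", "-N", "KUBE-KUBELET-CANARY").CombinedOutput()
	if err != nil {
		return fmt.Errorf("ip6tables nat table unavailable: %v: %s", err, out)
	}
	// Clean up the canary chain if creation succeeded.
	return exec.Command("ip6tables", "-w", "-t", "nat", "-X", "KUBE-KUBELET-CANARY").Run()
}

func main() {
	if err := probeIP6TablesNAT(); err != nil {
		fmt.Println(err) // on this guest: "can't initialize ip6tables table `nat'"
	}
}

Run against a guest without ip6table_nat loaded, the create attempt fails with the same "can't initialize ip6tables table `nat'" message that repeats in the kubelet log.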

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (419.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-025067 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-025067 -v=7 --alsologtostderr
E0422 17:25:07.901953   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:26:19.002550   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:26:30.950463   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-025067 -v=7 --alsologtostderr: exit status 82 (2m1.993233629s)

                                                
                                                
-- stdout --
	* Stopping node "ha-025067-m04"  ...
	* Stopping node "ha-025067-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:24:37.775715   36480 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:24:37.775824   36480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:37.775828   36480 out.go:304] Setting ErrFile to fd 2...
	I0422 17:24:37.775832   36480 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:24:37.776041   36480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:24:37.776292   36480 out.go:298] Setting JSON to false
	I0422 17:24:37.776375   36480 mustload.go:65] Loading cluster: ha-025067
	I0422 17:24:37.776752   36480 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:24:37.776855   36480 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:24:37.777023   36480 mustload.go:65] Loading cluster: ha-025067
	I0422 17:24:37.777164   36480 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:24:37.777203   36480 stop.go:39] StopHost: ha-025067-m04
	I0422 17:24:37.777535   36480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:37.777587   36480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:37.792238   36480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44023
	I0422 17:24:37.792708   36480 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:37.793301   36480 main.go:141] libmachine: Using API Version  1
	I0422 17:24:37.793337   36480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:37.793652   36480 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:37.796067   36480 out.go:177] * Stopping node "ha-025067-m04"  ...
	I0422 17:24:37.797207   36480 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 17:24:37.797245   36480 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:24:37.797471   36480 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 17:24:37.797492   36480 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:24:37.799984   36480 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:37.800401   36480 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:20:36 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:24:37.800424   36480 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:24:37.800543   36480 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:24:37.800839   36480 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:24:37.801031   36480 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:24:37.801212   36480 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:24:37.887600   36480 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 17:24:37.942167   36480 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 17:24:37.997003   36480 main.go:141] libmachine: Stopping "ha-025067-m04"...
	I0422 17:24:37.997039   36480 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:24:37.998564   36480 main.go:141] libmachine: (ha-025067-m04) Calling .Stop
	I0422 17:24:38.002024   36480 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 0/120
	I0422 17:24:39.286398   36480 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:24:39.287576   36480 main.go:141] libmachine: Machine "ha-025067-m04" was stopped.
	I0422 17:24:39.287592   36480 stop.go:75] duration metric: took 1.490390461s to stop
	I0422 17:24:39.287609   36480 stop.go:39] StopHost: ha-025067-m03
	I0422 17:24:39.287880   36480 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:24:39.287918   36480 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:24:39.302537   36480 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
	I0422 17:24:39.302994   36480 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:24:39.303556   36480 main.go:141] libmachine: Using API Version  1
	I0422 17:24:39.303578   36480 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:24:39.303906   36480 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:24:39.306139   36480 out.go:177] * Stopping node "ha-025067-m03"  ...
	I0422 17:24:39.307741   36480 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 17:24:39.307769   36480 main.go:141] libmachine: (ha-025067-m03) Calling .DriverName
	I0422 17:24:39.308020   36480 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 17:24:39.308048   36480 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHHostname
	I0422 17:24:39.310959   36480 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:39.311437   36480 main.go:141] libmachine: (ha-025067-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d5:51:30", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:19:15 +0000 UTC Type:0 Mac:52:54:00:d5:51:30 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-025067-m03 Clientid:01:52:54:00:d5:51:30}
	I0422 17:24:39.311484   36480 main.go:141] libmachine: (ha-025067-m03) DBG | domain ha-025067-m03 has defined IP address 192.168.39.220 and MAC address 52:54:00:d5:51:30 in network mk-ha-025067
	I0422 17:24:39.311679   36480 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHPort
	I0422 17:24:39.311892   36480 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHKeyPath
	I0422 17:24:39.312088   36480 main.go:141] libmachine: (ha-025067-m03) Calling .GetSSHUsername
	I0422 17:24:39.312246   36480 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m03/id_rsa Username:docker}
	I0422 17:24:39.399422   36480 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 17:24:39.454753   36480 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 17:24:39.511219   36480 main.go:141] libmachine: Stopping "ha-025067-m03"...
	I0422 17:24:39.511245   36480 main.go:141] libmachine: (ha-025067-m03) Calling .GetState
	I0422 17:24:39.512896   36480 main.go:141] libmachine: (ha-025067-m03) Calling .Stop
	I0422 17:24:39.516220   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 0/120
	I0422 17:24:40.517718   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 1/120
	I0422 17:24:41.519202   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 2/120
	I0422 17:24:42.520510   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 3/120
	I0422 17:24:43.521872   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 4/120
	I0422 17:24:44.523238   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 5/120
	I0422 17:24:45.524979   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 6/120
	I0422 17:24:46.526357   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 7/120
	I0422 17:24:47.527991   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 8/120
	I0422 17:24:48.529191   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 9/120
	I0422 17:24:49.531258   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 10/120
	I0422 17:24:50.532689   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 11/120
	I0422 17:24:51.534339   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 12/120
	I0422 17:24:52.535765   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 13/120
	I0422 17:24:53.537529   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 14/120
	I0422 17:24:54.539761   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 15/120
	I0422 17:24:55.541603   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 16/120
	I0422 17:24:56.543469   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 17/120
	I0422 17:24:57.545183   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 18/120
	I0422 17:24:58.546836   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 19/120
	I0422 17:24:59.548942   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 20/120
	I0422 17:25:00.550609   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 21/120
	I0422 17:25:01.552352   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 22/120
	I0422 17:25:02.553937   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 23/120
	I0422 17:25:03.555500   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 24/120
	I0422 17:25:04.557194   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 25/120
	I0422 17:25:05.558445   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 26/120
	I0422 17:25:06.559823   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 27/120
	I0422 17:25:07.561194   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 28/120
	I0422 17:25:08.562537   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 29/120
	I0422 17:25:09.564402   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 30/120
	I0422 17:25:10.565730   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 31/120
	I0422 17:25:11.567068   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 32/120
	I0422 17:25:12.568328   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 33/120
	I0422 17:25:13.569903   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 34/120
	I0422 17:25:14.571456   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 35/120
	I0422 17:25:15.572863   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 36/120
	I0422 17:25:16.574274   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 37/120
	I0422 17:25:17.575780   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 38/120
	I0422 17:25:18.577486   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 39/120
	I0422 17:25:19.579386   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 40/120
	I0422 17:25:20.580789   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 41/120
	I0422 17:25:21.582172   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 42/120
	I0422 17:25:22.583493   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 43/120
	I0422 17:25:23.584884   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 44/120
	I0422 17:25:24.586254   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 45/120
	I0422 17:25:25.587860   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 46/120
	I0422 17:25:26.589211   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 47/120
	I0422 17:25:27.591306   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 48/120
	I0422 17:25:28.592634   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 49/120
	I0422 17:25:29.594653   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 50/120
	I0422 17:25:30.595935   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 51/120
	I0422 17:25:31.597544   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 52/120
	I0422 17:25:32.598924   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 53/120
	I0422 17:25:33.600541   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 54/120
	I0422 17:25:34.602494   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 55/120
	I0422 17:25:35.603829   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 56/120
	I0422 17:25:36.605191   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 57/120
	I0422 17:25:37.606540   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 58/120
	I0422 17:25:38.607897   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 59/120
	I0422 17:25:39.609649   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 60/120
	I0422 17:25:40.611089   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 61/120
	I0422 17:25:41.612465   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 62/120
	I0422 17:25:42.613916   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 63/120
	I0422 17:25:43.615501   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 64/120
	I0422 17:25:44.617167   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 65/120
	I0422 17:25:45.618799   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 66/120
	I0422 17:25:46.620546   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 67/120
	I0422 17:25:47.621965   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 68/120
	I0422 17:25:48.623291   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 69/120
	I0422 17:25:49.625115   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 70/120
	I0422 17:25:50.626734   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 71/120
	I0422 17:25:51.628047   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 72/120
	I0422 17:25:52.629896   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 73/120
	I0422 17:25:53.631201   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 74/120
	I0422 17:25:54.632712   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 75/120
	I0422 17:25:55.634166   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 76/120
	I0422 17:25:56.635850   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 77/120
	I0422 17:25:57.637687   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 78/120
	I0422 17:25:58.639458   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 79/120
	I0422 17:25:59.641630   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 80/120
	I0422 17:26:00.643291   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 81/120
	I0422 17:26:01.644741   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 82/120
	I0422 17:26:02.646514   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 83/120
	I0422 17:26:03.647989   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 84/120
	I0422 17:26:04.649984   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 85/120
	I0422 17:26:05.651360   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 86/120
	I0422 17:26:06.653795   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 87/120
	I0422 17:26:07.655153   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 88/120
	I0422 17:26:08.656433   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 89/120
	I0422 17:26:09.658602   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 90/120
	I0422 17:26:10.659785   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 91/120
	I0422 17:26:11.661310   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 92/120
	I0422 17:26:12.662620   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 93/120
	I0422 17:26:13.663947   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 94/120
	I0422 17:26:14.665716   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 95/120
	I0422 17:26:15.667248   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 96/120
	I0422 17:26:16.668680   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 97/120
	I0422 17:26:17.670206   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 98/120
	I0422 17:26:18.671794   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 99/120
	I0422 17:26:19.674229   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 100/120
	I0422 17:26:20.675820   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 101/120
	I0422 17:26:21.677373   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 102/120
	I0422 17:26:22.678915   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 103/120
	I0422 17:26:23.680476   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 104/120
	I0422 17:26:24.682572   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 105/120
	I0422 17:26:25.684778   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 106/120
	I0422 17:26:26.686153   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 107/120
	I0422 17:26:27.688282   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 108/120
	I0422 17:26:28.689800   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 109/120
	I0422 17:26:29.691491   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 110/120
	I0422 17:26:30.692916   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 111/120
	I0422 17:26:31.694385   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 112/120
	I0422 17:26:32.695897   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 113/120
	I0422 17:26:33.697381   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 114/120
	I0422 17:26:34.699367   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 115/120
	I0422 17:26:35.701722   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 116/120
	I0422 17:26:36.702979   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 117/120
	I0422 17:26:37.704451   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 118/120
	I0422 17:26:38.705701   36480 main.go:141] libmachine: (ha-025067-m03) Waiting for machine to stop 119/120
	I0422 17:26:39.706780   36480 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 17:26:39.706835   36480 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0422 17:26:39.709043   36480 out.go:177] 
	W0422 17:26:39.710531   36480 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0422 17:26:39.710548   36480 out.go:239] * 
	* 
	W0422 17:26:39.713066   36480 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 17:26:39.714794   36480 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-025067 -v=7 --alsologtostderr" : exit status 82
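The stderr above shows the shape of the failure: libmachine issues a stop for ha-025067-m03 and then polls the VM state roughly once a second for 120 attempts, the machine never leaves "Running", and minikube surfaces the resulting stop error as GUEST_STOP_TIMEOUT with exit status 82. A rough sketch of that retry pattern, with hypothetical stop/getState callbacks standing in for the real libvirt driver calls (an illustration of the loop in the log, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// state is a stand-in for the driver's VM state; only "Running" matters here.
type state string

const running state = "Running"

// waitForStop mirrors the "Waiting for machine to stop N/120" loop in the log:
// request a stop, then poll once per second up to maxAttempts before giving up.
func waitForStop(stop func() error, getState func() state, maxAttempts int) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < maxAttempts; i++ {
		if getState() != running {
			return nil // machine stopped
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Hypothetical driver hooks: a VM that never leaves Running, as ha-025067-m03 did.
	stop := func() error { return nil }
	getState := func() state { return running }

	// 120 attempts, as in the log; this demo spins for the full two minutes.
	if err := waitForStop(stop, getState, 120); err != nil {
		// minikube reports this as GUEST_STOP_TIMEOUT and exits with status 82.
		fmt.Println("stop err:", err)
	}
}

With a driver whose getState reports a stopped machine, the loop returns early, which is what happened for ha-025067-m04 after a single wait.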
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-025067 --wait=true -v=7 --alsologtostderr
E0422 17:26:46.689009   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:30:07.902577   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:31:19.002595   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-025067 --wait=true -v=7 --alsologtostderr: (4m54.699883504s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-025067
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-025067 -n ha-025067
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-025067 logs -n 25: (2.007198303s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m02:/home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m04 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp testdata/cp-test.txt                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067:/home/docker/cp-test_ha-025067-m04_ha-025067.txt                      |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067 sudo cat                                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067.txt                                |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m02:/home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03:/home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m03 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-025067 node stop m02 -v=7                                                    | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-025067 node start m02 -v=7                                                   | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:23 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-025067 -v=7                                                          | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-025067 -v=7                                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-025067 --wait=true -v=7                                                   | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:26 UTC | 22 Apr 24 17:31 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-025067                                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:31 UTC |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 17:26:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 17:26:39.773338   36982 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:26:39.773457   36982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:26:39.773467   36982 out.go:304] Setting ErrFile to fd 2...
	I0422 17:26:39.773470   36982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:26:39.773648   36982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:26:39.774180   36982 out.go:298] Setting JSON to false
	I0422 17:26:39.775116   36982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4145,"bootTime":1713802655,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:26:39.775195   36982 start.go:139] virtualization: kvm guest
	I0422 17:26:39.777634   36982 out.go:177] * [ha-025067] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:26:39.779353   36982 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:26:39.779296   36982 notify.go:220] Checking for updates...
	I0422 17:26:39.781922   36982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:26:39.783402   36982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:26:39.784628   36982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:26:39.785849   36982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:26:39.787070   36982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:26:39.788888   36982 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:26:39.788980   36982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:26:39.789382   36982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:26:39.789449   36982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:26:39.804180   36982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0422 17:26:39.804636   36982 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:26:39.805194   36982 main.go:141] libmachine: Using API Version  1
	I0422 17:26:39.805230   36982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:26:39.805541   36982 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:26:39.805755   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:26:39.840752   36982 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 17:26:39.842158   36982 start.go:297] selected driver: kvm2
	I0422 17:26:39.842175   36982 start.go:901] validating driver "kvm2" against &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:26:39.842309   36982 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:26:39.842620   36982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:26:39.842680   36982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 17:26:39.857539   36982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 17:26:39.858538   36982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:26:39.858605   36982 cni.go:84] Creating CNI manager for ""
	I0422 17:26:39.858617   36982 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 17:26:39.858684   36982 start.go:340] cluster config:
	{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:26:39.858819   36982 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:26:39.861594   36982 out.go:177] * Starting "ha-025067" primary control-plane node in "ha-025067" cluster
	I0422 17:26:39.862997   36982 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:26:39.863043   36982 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 17:26:39.863053   36982 cache.go:56] Caching tarball of preloaded images
	I0422 17:26:39.863165   36982 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:26:39.863182   36982 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:26:39.863288   36982 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:26:39.863481   36982 start.go:360] acquireMachinesLock for ha-025067: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:26:39.863523   36982 start.go:364] duration metric: took 23.028µs to acquireMachinesLock for "ha-025067"
	I0422 17:26:39.863542   36982 start.go:96] Skipping create...Using existing machine configuration
	I0422 17:26:39.863550   36982 fix.go:54] fixHost starting: 
	I0422 17:26:39.863794   36982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:26:39.863829   36982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:26:39.877777   36982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0422 17:26:39.878191   36982 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:26:39.878692   36982 main.go:141] libmachine: Using API Version  1
	I0422 17:26:39.878712   36982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:26:39.879027   36982 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:26:39.879266   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:26:39.879443   36982 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:26:39.880922   36982 fix.go:112] recreateIfNeeded on ha-025067: state=Running err=<nil>
	W0422 17:26:39.880938   36982 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 17:26:39.883730   36982 out.go:177] * Updating the running kvm2 "ha-025067" VM ...
	I0422 17:26:39.885028   36982 machine.go:94] provisionDockerMachine start ...
	I0422 17:26:39.885049   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:26:39.885236   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:39.887733   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:39.888228   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:39.888247   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:39.888398   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:39.888576   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:39.888730   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:39.888863   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:39.889001   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:39.889173   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:39.889186   36982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 17:26:40.004628   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067
	
	I0422 17:26:40.004656   36982 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:26:40.004907   36982 buildroot.go:166] provisioning hostname "ha-025067"
	I0422 17:26:40.004940   36982 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:26:40.005159   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.007930   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.008342   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.008365   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.008547   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.008733   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.008886   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.009066   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.009277   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:40.009449   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:40.009461   36982 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067 && echo "ha-025067" | sudo tee /etc/hostname
	I0422 17:26:40.142485   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067
	
	I0422 17:26:40.142548   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.145459   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.145923   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.145961   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.146152   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.146345   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.146481   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.146616   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.146760   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:40.146944   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:40.146968   36982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:26:40.264502   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:26:40.264528   36982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:26:40.264544   36982 buildroot.go:174] setting up certificates
	I0422 17:26:40.264551   36982 provision.go:84] configureAuth start
	I0422 17:26:40.264558   36982 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:26:40.264812   36982 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:26:40.267914   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.268333   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.268376   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.268627   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.270810   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.271269   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.271293   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.271413   36982 provision.go:143] copyHostCerts
	I0422 17:26:40.271446   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:26:40.271483   36982 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:26:40.271492   36982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:26:40.271572   36982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:26:40.271676   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:26:40.271703   36982 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:26:40.271714   36982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:26:40.271751   36982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:26:40.271873   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:26:40.271899   36982 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:26:40.271909   36982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:26:40.271941   36982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:26:40.272007   36982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067 san=[127.0.0.1 192.168.39.22 ha-025067 localhost minikube]
	I0422 17:26:40.557019   36982 provision.go:177] copyRemoteCerts
	I0422 17:26:40.557080   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:26:40.557102   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.560223   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.560595   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.560622   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.560760   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.560961   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.561156   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.561307   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:26:40.650872   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:26:40.650947   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:26:40.679611   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:26:40.679709   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0422 17:26:40.716656   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:26:40.716754   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 17:26:40.760045   36982 provision.go:87] duration metric: took 495.482143ms to configureAuth
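The configureAuth step above regenerates the machine's server certificate with the SANs shown in the log and copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A minimal spot check over SSH, assuming the paths from the log and that openssl is available in the Buildroot guest:

	# the server cert was signed by the minikube CA that was copied next to it
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# SANs should match the san=[...] list logged by provision.go:117 above
	sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'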
	I0422 17:26:40.760071   36982 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:26:40.760305   36982 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:26:40.760385   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.763115   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.763513   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.763546   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.763708   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.763925   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.764066   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.764241   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.764392   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:40.764587   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:40.764604   36982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:28:11.669894   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:28:11.669925   36982 machine.go:97] duration metric: took 1m31.784881509s to provisionDockerMachine
	I0422 17:28:11.669941   36982 start.go:293] postStartSetup for "ha-025067" (driver="kvm2")
	I0422 17:28:11.669954   36982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:28:11.669976   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:11.670251   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:28:11.670305   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:11.673313   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.673768   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:11.673795   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.674016   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:11.674185   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.674341   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:11.674511   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:28:11.760640   36982 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:28:11.765114   36982 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:28:11.765136   36982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:28:11.765211   36982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:28:11.765298   36982 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:28:11.765309   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:28:11.765386   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:28:11.776083   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:28:11.802189   36982 start.go:296] duration metric: took 132.232595ms for postStartSetup
	I0422 17:28:11.802230   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:11.802531   36982 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0422 17:28:11.802559   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:11.804882   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.805306   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:11.805331   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.805514   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:11.805680   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.805833   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:11.805992   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	W0422 17:28:11.890706   36982 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0422 17:28:11.890735   36982 fix.go:56] duration metric: took 1m32.027183955s for fixHost
	I0422 17:28:11.890763   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:11.893368   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.893705   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:11.893748   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.893877   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:11.894045   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.894200   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.894349   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:11.894486   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:28:11.894681   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:28:11.894693   36982 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:28:12.010085   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806891.989147578
	
	I0422 17:28:12.010107   36982 fix.go:216] guest clock: 1713806891.989147578
	I0422 17:28:12.010114   36982 fix.go:229] Guest: 2024-04-22 17:28:11.989147578 +0000 UTC Remote: 2024-04-22 17:28:11.890747238 +0000 UTC m=+92.165568008 (delta=98.40034ms)
	I0422 17:28:12.010131   36982 fix.go:200] guest clock delta is within tolerance: 98.40034ms
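The clock check above takes a host timestamp, reads one from the guest (the remote command hidden behind the %!s/%!N placeholders is presumably date +%s.%N), and accepts the drift when it is under the tolerance. A rough shell equivalent, assuming the SSH key path and docker user shown in the log:

	host_ts=$(date +%s.%N)
	guest_ts=$(ssh -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa \
	    docker@192.168.39.22 'date +%s.%N')
	# minikube reports guest_ts - host_ts as the "guest clock" delta
	awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN { printf "delta: %.6fs\n", g - h }'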
	I0422 17:28:12.010136   36982 start.go:83] releasing machines lock for "ha-025067", held for 1m32.146602236s
	I0422 17:28:12.010155   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.010458   36982 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:28:12.012946   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.013365   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:12.013388   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.013526   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.013993   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.014171   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.014265   36982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:28:12.014327   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:12.014343   36982 ssh_runner.go:195] Run: cat /version.json
	I0422 17:28:12.014362   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:12.016887   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017211   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017245   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:12.017267   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017397   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:12.017557   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:12.017604   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:12.017623   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017680   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:12.017771   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:12.017839   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:28:12.017935   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:12.018082   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:12.018214   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:28:12.133366   36982 ssh_runner.go:195] Run: systemctl --version
	I0422 17:28:12.140018   36982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:28:12.307651   36982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:28:12.314358   36982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:28:12.314430   36982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:28:12.325562   36982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 17:28:12.325590   36982 start.go:494] detecting cgroup driver to use...
	I0422 17:28:12.325652   36982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:28:12.343610   36982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:28:12.359089   36982 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:28:12.359157   36982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:28:12.374436   36982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:28:12.389361   36982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:28:12.540222   36982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:28:12.691473   36982 docker.go:233] disabling docker service ...
	I0422 17:28:12.691534   36982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:28:12.710923   36982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:28:12.725551   36982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:28:12.877489   36982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:28:13.030445   36982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:28:13.044709   36982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:28:13.065897   36982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:28:13.065954   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.077198   36982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:28:13.077259   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.087966   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.099022   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.110116   36982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:28:13.120899   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.131337   36982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.143635   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.154572   36982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:28:13.164479   36982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:28:13.174198   36982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:28:13.328596   36982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:28:15.591596   36982 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.262959858s)
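All of the sed edits above target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before CRI-O is restarted. A quick way to confirm they took effect, using only the paths and service name from the log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
	#           conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0"
	systemctl is-active crio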
	I0422 17:28:15.591636   36982 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:28:15.591691   36982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:28:15.596873   36982 start.go:562] Will wait 60s for crictl version
	I0422 17:28:15.596954   36982 ssh_runner.go:195] Run: which crictl
	I0422 17:28:15.601220   36982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:28:15.645216   36982 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:28:15.645295   36982 ssh_runner.go:195] Run: crio --version
	I0422 17:28:15.680613   36982 ssh_runner.go:195] Run: crio --version
	I0422 17:28:15.712048   36982 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:28:15.713251   36982 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:28:15.715865   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:15.716284   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:15.716311   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:15.716506   36982 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:28:15.721464   36982 kubeadm.go:877] updating cluster {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 17:28:15.721606   36982 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:28:15.721670   36982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:28:15.767510   36982 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:28:15.767531   36982 crio.go:433] Images already preloaded, skipping extraction
	I0422 17:28:15.767582   36982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:28:15.805433   36982 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:28:15.805453   36982 cache_images.go:84] Images are preloaded, skipping loading
	I0422 17:28:15.805461   36982 kubeadm.go:928] updating node { 192.168.39.22 8443 v1.30.0 crio true true} ...
	I0422 17:28:15.805578   36982 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:28:15.805640   36982 ssh_runner.go:195] Run: crio config
	I0422 17:28:15.860135   36982 cni.go:84] Creating CNI manager for ""
	I0422 17:28:15.860154   36982 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 17:28:15.860162   36982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 17:28:15.860182   36982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-025067 NodeName:ha-025067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 17:28:15.860312   36982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-025067"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 17:28:15.860330   36982 kube-vip.go:111] generating kube-vip config ...
	I0422 17:28:15.860366   36982 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:28:15.873243   36982 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:28:15.873377   36982 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
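kube-vip is deployed as a static pod from /etc/kubernetes/manifests and, per the config above, advertises 192.168.39.254 and load-balances the API server on port 8443. Two hedged spot checks once the node is back up (reading /version anonymously relies on the default kubeadm RBAC, which is an assumption here):

	sudo crictl ps --name kube-vip
	curl -sk https://192.168.39.254:8443/version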
	I0422 17:28:15.873431   36982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:28:15.883898   36982 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 17:28:15.883975   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 17:28:15.894183   36982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0422 17:28:15.912857   36982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:28:15.930861   36982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0422 17:28:15.981653   36982 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 17:28:16.038420   36982 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:28:16.044450   36982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:28:16.262632   36982 ssh_runner.go:195] Run: sudo systemctl start kubelet
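The kubelet unit generated above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service, and systemd is reloaded before kubelet is started. A small sketch to confirm the active override is the one minikube wrote (paths taken from the log):

	systemctl cat kubelet | grep -- --node-ip
	# expected: --node-ip=192.168.39.22 from the ExecStart line above
	systemctl is-active kubelet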
	I0422 17:28:16.279530   36982 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.22
	I0422 17:28:16.279557   36982 certs.go:194] generating shared ca certs ...
	I0422 17:28:16.279577   36982 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:28:16.279773   36982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:28:16.279841   36982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:28:16.279859   36982 certs.go:256] generating profile certs ...
	I0422 17:28:16.279969   36982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:28:16.280006   36982 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f
	I0422 17:28:16.280026   36982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.56 192.168.39.220 192.168.39.254]
	I0422 17:28:16.379927   36982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f ...
	I0422 17:28:16.379960   36982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f: {Name:mkcca004100db755f659718d99336dc23fea15d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:28:16.380155   36982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f ...
	I0422 17:28:16.380172   36982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f: {Name:mk1fc3b370c6e75b85764b9a115dc2c170aa8ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:28:16.380272   36982 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:28:16.380456   36982 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
	I0422 17:28:16.380589   36982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:28:16.380606   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:28:16.380617   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:28:16.380630   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:28:16.380641   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:28:16.380651   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:28:16.380662   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:28:16.380679   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:28:16.380692   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:28:16.380740   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:28:16.380765   36982 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:28:16.380775   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:28:16.380795   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:28:16.380815   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:28:16.380837   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:28:16.380873   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:28:16.380899   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.380912   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.380925   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.381427   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:28:16.410294   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:28:16.438521   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:28:16.465847   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:28:16.493212   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 17:28:16.520182   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 17:28:16.547255   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:28:16.573323   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:28:16.598513   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:28:16.624114   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:28:16.648891   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:28:16.673889   36982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 17:28:16.690995   36982 ssh_runner.go:195] Run: openssl version
	I0422 17:28:16.697122   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:28:16.708554   36982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.713279   36982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.713322   36982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.719285   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:28:16.729638   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:28:16.741269   36982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.746056   36982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.746103   36982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.752064   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:28:16.761667   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:28:16.772528   36982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.777108   36982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.777150   36982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.782890   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:28:16.792745   36982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:28:16.797462   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 17:28:16.803577   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 17:28:16.809563   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 17:28:16.815662   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 17:28:16.821701   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 17:28:16.827932   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
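Each openssl run above uses -checkend 86400, which exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now and exits non-zero if it will have expired by then; a failing check is what would flag the cert as close to expiry. For example, against one of the paths from the log:

	if sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "etcd server cert is valid for at least another 24h"
	else
	    echo "etcd server cert expires within 24h"
	fi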
	I0422 17:28:16.834152   36982 kubeadm.go:391] StartCluster: {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:28:16.834300   36982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 17:28:16.834361   36982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 17:28:16.875599   36982 cri.go:89] found id: "709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55"
	I0422 17:28:16.875626   36982 cri.go:89] found id: "36ec323d3d57efa3a7865ea0b4c446d1d9693b29cfb5c8e4b0a8565ee2168e49"
	I0422 17:28:16.875632   36982 cri.go:89] found id: "53b06374b06dd939153ea52cde33ea4c9e5af1b0e71567ac085fa60e1b50dfcc"
	I0422 17:28:16.875637   36982 cri.go:89] found id: "28513ecdee88f049380a68df0b9401d304006fbcb6989f1646099457402abd21"
	I0422 17:28:16.875641   36982 cri.go:89] found id: "478a7702f9b4c7965c1fd709bf3d979b179890a418978a00a44bbf3e96db423f"
	I0422 17:28:16.875646   36982 cri.go:89] found id: "0b296211b78c6b52e828836706f135ff2f7d87792e805046d4ed6b64f100a063"
	I0422 17:28:16.875649   36982 cri.go:89] found id: "2c0e4f60a87d1a3c2f83f01e0fa5a6937f3791fa29bc400e9be66081cc41c0ca"
	I0422 17:28:16.875653   36982 cri.go:89] found id: "c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55"
	I0422 17:28:16.875658   36982 cri.go:89] found id: "524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0"
	I0422 17:28:16.875666   36982 cri.go:89] found id: "f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72"
	I0422 17:28:16.875670   36982 cri.go:89] found id: "ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3"
	I0422 17:28:16.875683   36982 cri.go:89] found id: "b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f"
	I0422 17:28:16.875691   36982 cri.go:89] found id: "549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89"
	I0422 17:28:16.875695   36982 cri.go:89] found id: "819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158"
	I0422 17:28:16.875703   36982 cri.go:89] found id: "9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b"
	I0422 17:28:16.875708   36982 cri.go:89] found id: ""
	I0422 17:28:16.875747   36982 ssh_runner.go:195] Run: sudo runc list -f json
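	The two commands above are how minikube enumerates kube-system containers on this node: crictl ps -a with the io.kubernetes.pod.namespace=kube-system label filter (whose container IDs are echoed back by cri.go), followed by runc list -f json. The CRI-O section below shows the same query arriving at the runtime as a ListContainers RPC carrying a ContainerFilter. As a hedged sketch of that RPC path, not of minikube's own code (minikube runs the crictl command over SSH, as the ssh_runner lines show), the Go program below talks to the CRI-O socket directly with the k8s.io/cri-api client; the socket path is an assumption for a default CRI-O install.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed default CRI-O socket path; minikube reaches the same runtime via crictl over SSH.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Same filter as the crictl invocation above: every container (running or exited)
		// whose pod-namespace label is kube-system.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}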
	
	
	==> CRI-O <==
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.270197257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807095270157548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7499b72a-7c8f-49b0-af26-85e510b6de9e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.270754300Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5bcdfd46-656c-44a4-ad60-c5f4702d7199 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.270820743Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5bcdfd46-656c-44a4-ad60-c5f4702d7199 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.271613692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5bcdfd46-656c-44a4-ad60-c5f4702d7199 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.325671736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64bfbf8f-f84a-4937-8e79-a6987d533a2a name=/runtime.v1.RuntimeService/Version
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.325747138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64bfbf8f-f84a-4937-8e79-a6987d533a2a name=/runtime.v1.RuntimeService/Version
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.327542823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59159032-d67c-485d-98a7-dedebb5bfa90 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.327956431Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807095327933949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59159032-d67c-485d-98a7-dedebb5bfa90 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.328643101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52a41939-8931-4645-aae9-bd503a2ffe2b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.328705996Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52a41939-8931-4645-aae9-bd503a2ffe2b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.329212798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52a41939-8931-4645-aae9-bd503a2ffe2b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.382619880Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f7065aa-deba-4383-a647-502f54d5004b name=/runtime.v1.RuntimeService/Version
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.382699464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f7065aa-deba-4383-a647-502f54d5004b name=/runtime.v1.RuntimeService/Version
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.384209281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cef31ac-d52d-4811-8f06-d66e2fb3e95a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.384908521Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807095384884010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cef31ac-d52d-4811-8f06-d66e2fb3e95a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.385568553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c80a5f0-f12d-4f8d-bf4c-406f5e5566b2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.385623242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c80a5f0-f12d-4f8d-bf4c-406f5e5566b2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.386267003Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c80a5f0-f12d-4f8d-bf4c-406f5e5566b2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.442425549Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4bf5c15-fb75-448d-9a09-91df090580a3 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.442503925Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4bf5c15-fb75-448d-9a09-91df090580a3 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.444942040Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b09cc010-98c5-421c-86e0-b82c43709ab0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.445432821Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807095445404728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b09cc010-98c5-421c-86e0-b82c43709ab0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.446287028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c612465e-cf6b-4d28-9c6f-a6eda94332e9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.446386109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c612465e-cf6b-4d28-9c6f-a6eda94332e9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:31:35 ha-025067 crio[3943]: time="2024-04-22 17:31:35.446800824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c612465e-cf6b-4d28-9c6f-a6eda94332e9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	94c2813a81b57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       5                   b09e16f41fde5       storage-provisioner
	ad6996152a42b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   23783db6b701c       kindnet-tmxd9
	d87a0b06fb028       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      2 minutes ago        Running             kube-apiserver            3                   d19bd18d24b2b       kube-apiserver-ha-025067
	5b806d49ec726       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      2 minutes ago        Running             kube-controller-manager   2                   aad1cce9e2255       kube-controller-manager-ha-025067
	70e83e476939c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   397ddd568c901       busybox-fc5497c4f-l97ld
	2291407bc9dbd       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   69b798cf09790       kube-vip-ha-025067
	699b754e81059       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago        Exited              kindnet-cni               3                   23783db6b701c       kindnet-tmxd9
	638b2dd05dfbb       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago        Running             kube-proxy                1                   87020be3f343a       kube-proxy-pf7cc
	efa14e10b593c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago        Running             etcd                      1                   69eb4762e010c       etcd-ha-025067
	9a1f08d9bc71f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   69b0bb93f5a8d       coredns-7db6d8ff4d-nswqp
	8b1b8494064dc       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago        Exited              kube-apiserver            2                   d19bd18d24b2b       kube-apiserver-ha-025067
	251e837f0b8a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       4                   b09e16f41fde5       storage-provisioner
	10e0c7cd8590b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago        Running             kube-scheduler            1                   31b4c9e60ad47       kube-scheduler-ha-025067
	05ea6b3902d85       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago        Exited              kube-controller-manager   1                   aad1cce9e2255       kube-controller-manager-ha-025067
	709020245fe70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   cd88edc27f601       coredns-7db6d8ff4d-vrl4h
	983cb8537237f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   3c3abb6c214d4       busybox-fc5497c4f-l97ld
	c0af820e7bd06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   c2921baac16b3       coredns-7db6d8ff4d-vrl4h
	524e02d80347d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   b553b11bb990b       coredns-7db6d8ff4d-nswqp
	f841dcb8dd09b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      13 minutes ago       Exited              kube-proxy                0                   052596614cf9c       kube-proxy-pf7cc
	b3d751e3e8f50       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   3c34eb37cd442       etcd-ha-025067
	549930f1d83f6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      14 minutes ago       Exited              kube-scheduler            0                   c0ff0dbc27bbd       kube-scheduler-ha-025067
	
	
	==> coredns [524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0] <==
	[INFO] 10.244.1.2:49747 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094083s
	[INFO] 10.244.1.2:39851 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094869s
	[INFO] 10.244.1.2:51921 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132016s
	[INFO] 10.244.2.2:46485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151891s
	[INFO] 10.244.2.2:52343 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183731s
	[INFO] 10.244.2.2:36982 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162215s
	[INFO] 10.244.2.2:56193 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471319s
	[INFO] 10.244.2.2:48503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072359s
	[INFO] 10.244.2.2:35429 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006794s
	[INFO] 10.244.2.2:56484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092002s
	[INFO] 10.244.0.4:39516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189987s
	[INFO] 10.244.0.4:60228 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082728s
	[INFO] 10.244.1.2:44703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203159s
	[INFO] 10.244.1.2:33524 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167155s
	[INFO] 10.244.1.2:43201 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098618s
	[INFO] 10.244.2.2:53563 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215578s
	[INFO] 10.244.2.2:54616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163304s
	[INFO] 10.244.0.4:49280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092142s
	[INFO] 10.244.1.2:40544 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116574s
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249064s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:48954->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:48954->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44606->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1117179347]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 17:28:34.718) (total time: 12714ms):
	Trace[1117179347]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44606->10.96.0.1:443: read: connection reset by peer 12714ms (17:28:47.432)
	Trace[1117179347]: [12.714168324s] [12.714168324s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44606->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38882->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38882->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38880->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1333151839]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 17:28:36.263) (total time: 11167ms):
	Trace[1333151839]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38880->10.96.0.1:443: read: connection reset by peer 11167ms (17:28:47.431)
	Trace[1333151839]: [11.167963952s] [11.167963952s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38880->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55] <==
	[INFO] 10.244.0.4:44231 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.015736347s
	[INFO] 10.244.0.4:37322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115326s
	[INFO] 10.244.1.2:58538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135694s
	[INFO] 10.244.1.2:51828 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153493s
	[INFO] 10.244.1.2:44556 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001447535s
	[INFO] 10.244.2.2:44901 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139485s
	[INFO] 10.244.0.4:42667 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108865s
	[INFO] 10.244.0.4:54399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073213s
	[INFO] 10.244.1.2:35127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090826s
	[INFO] 10.244.2.2:52722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185046s
	[INFO] 10.244.2.2:49596 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128238s
	[INFO] 10.244.0.4:59309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125541s
	[INFO] 10.244.0.4:42344 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215786s
	[INFO] 10.244.0.4:34084 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000295612s
	[INFO] 10.244.1.2:50561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016924s
	[INFO] 10.244.1.2:40185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080135s
	[INFO] 10.244.1.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083107s
	[INFO] 10.244.2.2:52310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147992s
	[INFO] 10.244.2.2:48499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103149s
	[INFO] 10.244.2.2:60500 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018474s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
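
The two newer coredns containers above never pass their readiness check because every list/watch against the Service VIP (https://10.96.0.1:443) fails with "no route to host", TLS handshake timeouts, or "connection refused", while this older instance served queries normally until it was sent SIGTERM. A minimal way to spot-check that VIP from inside the cluster, assuming the kubeconfig context is named after the profile and using a throwaway curl pod (pod name and image are illustrative, not taken from this run):

	kubectl --context ha-025067 run vip-check --rm -i --restart=Never \
	  --image=curlimages/curl -- \
	  curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz

Any HTTP status code (even 401/403) means the VIP routes to an apiserver; the errors above indicate the requests never got that far.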
	
	
	==> describe nodes <==
	Name:               ha-025067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:17:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:31:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:29:06 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:29:06 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:29:06 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:29:06 +0000   Mon, 22 Apr 2024 17:17:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-025067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 73a664449fd9403194a5919e23b0871b
	  System UUID:                73a66444-9fd9-4031-94a5-919e23b0871b
	  Boot ID:                    4c2ace2e-318b-4b8f-bd1e-a5f6d5151f88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l97ld              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-nswqp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-vrl4h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-025067                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-tmxd9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-025067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-025067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pf7cc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-025067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-025067                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 2m30s              kube-proxy       
	  Normal   NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-025067 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-025067 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-025067 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   NodeReady                13m                kubelet          Node ha-025067 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Warning  ContainerGCFailed        4m1s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m24s              node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   RegisteredNode           2m17s              node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   RegisteredNode           33s                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	
	
	Name:               ha-025067-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_18_41_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:18:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:31:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-025067-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1f034a156f4a3fb9cb79780785386e
	  System UUID:                8a1f034a-156f-4a3f-b9cb-79780785386e
	  Boot ID:                    0b380f4b-d8f3-41ee-8b4f-eb34838f377a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m6qxt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-025067-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-ctdzp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-025067-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-025067-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-dk5ww                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-025067-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-025067-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m24s                  kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-025067-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-025067-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-025067-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  NodeNotReady             9m30s                  node-controller  Node ha-025067-m02 status is now: NodeNotReady
	  Normal  Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m57s (x8 over 2m57s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s (x8 over 2m57s)  kubelet          Node ha-025067-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s (x7 over 2m57s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m25s                  node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           34s                    node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	
	
	Name:               ha-025067-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_19_56_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:19:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:31:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:31:05 +0000   Mon, 22 Apr 2024 17:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:31:05 +0000   Mon, 22 Apr 2024 17:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:31:05 +0000   Mon, 22 Apr 2024 17:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:31:05 +0000   Mon, 22 Apr 2024 17:30:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-025067-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 300afc7a045c4fd490327eb7452e4f8c
	  System UUID:                300afc7a-045c-4fd4-9032-7eb7452e4f8c
	  Boot ID:                    3facd2c6-8c8e-45dc-b7f2-2bd91cd947ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tvcmk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-025067-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-ztcgm                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-025067-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-025067-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wsr9x                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-025067-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-025067-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 43s                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-025067-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal   RegisteredNode           2m25s              node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	  Normal   NodeNotReady             105s               node-controller  Node ha-025067-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x3 over 62s)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x3 over 62s)  kubelet          Node ha-025067-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x3 over 62s)  kubelet          Node ha-025067-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s (x2 over 62s)  kubelet          Node ha-025067-m03 has been rebooted, boot id: 3facd2c6-8c8e-45dc-b7f2-2bd91cd947ff
	  Normal   NodeReady                62s (x2 over 62s)  kubelet          Node ha-025067-m03 status is now: NodeReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-025067-m03 event: Registered Node ha-025067-m03 in Controller
	
	
	Name:               ha-025067-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_20_51_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:20:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:31:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:31:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:31:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:31:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:31:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-025067-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfe8f8092cda4851adcca8410e5437c9
	  System UUID:                bfe8f809-2cda-4851-adcc-a8410e5437c9
	  Boot ID:                    5499526e-ba23-4234-be24-d0b1e2a89439
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-d6tpm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-kbhbk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-025067-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-025067-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m25s              node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   RegisteredNode           2m18s              node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   NodeNotReady             105s               node-controller  Node ha-025067-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           34s                node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x3 over 9s)    kubelet          Node ha-025067-m04 has been rebooted, boot id: 5499526e-ba23-4234-be24-d0b1e2a89439
	  Normal   NodeHasSufficientMemory  9s (x4 over 9s)    kubelet          Node ha-025067-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x4 over 9s)    kubelet          Node ha-025067-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x4 over 9s)    kubelet          Node ha-025067-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-025067-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-025067-m04 status is now: NodeReady
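
Of note in the events above: ha-025067 logged a ContainerGCFailed warning because /var/run/crio/crio.sock was missing while CRI-O restarted, and both m03 and m04 report Rebooted. A quick, hedged spot-check of the runtime socket on the primary node, reusing the binary path this report already uses (the exact command is illustrative, not part of the test run):

	out/minikube-linux-amd64 -p ha-025067 ssh "sudo systemctl is-active crio && sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info > /dev/null && echo crio socket reachable"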
	
	
	==> dmesg <==
	[  +8.717166] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.065948] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064117] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195469] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.121015] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285063] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.448511] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059431] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.181808] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.968933] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.287359] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.083571] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.934890] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 17:18] kauditd_printk_skb: 74 callbacks suppressed
	[Apr22 17:25] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 17:28] systemd-fstab-generator[3860]: Ignoring "noauto" option for root device
	[  +0.155389] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.184116] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.151233] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +0.286925] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +2.910594] systemd-fstab-generator[4086]: Ignoring "noauto" option for root device
	[  +5.545349] kauditd_printk_skb: 132 callbacks suppressed
	[ +12.865031] kauditd_printk_skb: 89 callbacks suppressed
	[ +10.801229] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 17:29] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f] <==
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-22T17:26:41.187704Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:26:41.187812Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T17:26:41.187917Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"cde0bb267fc4e559","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-22T17:26:41.188199Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188272Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188339Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188472Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188542Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188619Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188653Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188677Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.18873Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.188808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.188927Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.189106Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.189249Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.189289Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.192505Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-04-22T17:26:41.192651Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-04-22T17:26:41.19269Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-025067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"]}
	
	
	==> etcd [efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb] <==
	{"level":"warn","ts":"2024-04-22T17:30:29.738913Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.220:2380/version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:29.738986Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:33.590746Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9be776574408594","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:33.590948Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9be776574408594","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:33.740661Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.220:2380/version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:33.740739Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:37.743582Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.220:2380/version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:37.743755Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:38.590988Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9be776574408594","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:38.5912Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9be776574408594","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:41.746458Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.220:2380/version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:41.746643Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"e9be776574408594","error":"Get \"https://192.168.39.220:2380/version\": dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:43.592224Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e9be776574408594","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-22T17:30:43.592294Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e9be776574408594","rtt":"0s","error":"dial tcp 192.168.39.220:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-22T17:30:45.223461Z","caller":"traceutil/trace.go:171","msg":"trace[1247268506] transaction","detail":"{read_only:false; response_revision:2400; number_of_response:1; }","duration":"125.659943ms","start":"2024-04-22T17:30:45.097706Z","end":"2024-04-22T17:30:45.223365Z","steps":["trace[1247268506] 'process raft request'  (duration: 125.547976ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:30:45.763439Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:30:45.792377Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:30:45.794941Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:30:45.821755Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"cde0bb267fc4e559","to":"e9be776574408594","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-22T17:30:45.821822Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:30:45.827313Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"cde0bb267fc4e559","to":"e9be776574408594","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-22T17:30:45.827376Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:30:47.040794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.720353ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16526397567203304466 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.22\" mod_revision:2371 > success:<request_put:<key:\"/registry/masterleases/192.168.39.22\" value_size:66 lease:7303025530348528656 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.22\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-22T17:30:47.041097Z","caller":"traceutil/trace.go:171","msg":"trace[1813464151] transaction","detail":"{read_only:false; response_revision:2408; number_of_response:1; }","duration":"201.73996ms","start":"2024-04-22T17:30:46.839282Z","end":"2024-04-22T17:30:47.041022Z","steps":["trace[1813464151] 'process raft request'  (duration: 68.495556ms)","trace[1813464151] 'compare'  (duration: 131.597074ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:30:57.485267Z","caller":"traceutil/trace.go:171","msg":"trace[1103622926] transaction","detail":"{read_only:false; response_revision:2462; number_of_response:1; }","duration":"101.077242ms","start":"2024-04-22T17:30:57.384174Z","end":"2024-04-22T17:30:57.485251Z","steps":["trace[1103622926] 'process raft request'  (duration: 100.985146ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:31:36 up 14 min,  0 users,  load average: 0.75, 0.56, 0.29
	Linux ha-025067 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [699b754e810591cf7919bf90a3745a3dc53bd122b0550f9d614c7633e290e1ae] <==
	I0422 17:28:22.932846       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0422 17:28:22.932935       1 main.go:107] hostIP = 192.168.39.22
	podIP = 192.168.39.22
	I0422 17:28:22.977581       1 main.go:116] setting mtu 1500 for CNI 
	I0422 17:28:22.977685       1 main.go:146] kindnetd IP family: "ipv4"
	I0422 17:28:22.977712       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0422 17:28:33.285584       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0422 17:28:35.143625       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 17:28:38.215771       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 17:28:41.287714       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 17:28:44.359728       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
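
This kindnetd instance retries the node list a handful of times and then panics once https://10.96.0.1:443 stays unreachable, which is why a newer kindnet container appears below. A hedged way to see the restart and pull the crashed container's last output (the app=kindnet label is an assumption about the DaemonSet's selector, not confirmed by this report):

	kubectl --context ha-025067 -n kube-system get pods -l app=kindnet -o wide
	kubectl --context ha-025067 -n kube-system logs -l app=kindnet --previous --tail=20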
	
	
	==> kindnet [ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500] <==
	I0422 17:31:00.509926       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:31:10.527523       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:31:10.527634       1 main.go:227] handling current node
	I0422 17:31:10.527666       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:31:10.527691       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:31:10.527841       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:31:10.527930       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:31:10.528139       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:31:10.528180       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:31:20.538694       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:31:20.538805       1 main.go:227] handling current node
	I0422 17:31:20.538838       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:31:20.538858       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:31:20.539004       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:31:20.539089       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:31:20.539191       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:31:20.539239       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:31:30.575195       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:31:30.575362       1 main.go:227] handling current node
	I0422 17:31:30.575388       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:31:30.575407       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:31:30.575618       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0422 17:31:30.575682       1 main.go:250] Node ha-025067-m03 has CIDR [10.244.2.0/24] 
	I0422 17:31:30.575795       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:31:30.575840       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5] <==
	I0422 17:28:23.071110       1 options.go:221] external host was not specified, using 192.168.39.22
	I0422 17:28:23.106299       1 server.go:148] Version: v1.30.0
	I0422 17:28:23.106409       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:28:24.324796       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0422 17:28:24.330665       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 17:28:24.333948       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0422 17:28:24.334000       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0422 17:28:24.334254       1 instance.go:299] Using reconciler: lease
	W0422 17:28:44.324402       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0422 17:28:44.324862       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0422 17:28:44.335926       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348] <==
	I0422 17:29:05.378496       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0422 17:29:05.378512       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0422 17:29:05.378529       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0422 17:29:05.439822       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 17:29:05.440617       1 aggregator.go:165] initial CRD sync complete...
	I0422 17:29:05.440784       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 17:29:05.440813       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 17:29:05.441447       1 cache.go:39] Caches are synced for autoregister controller
	I0422 17:29:05.481565       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 17:29:05.483914       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 17:29:05.483975       1 policy_source.go:224] refreshing policies
	I0422 17:29:05.531136       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 17:29:05.536093       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 17:29:05.536163       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 17:29:05.536239       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 17:29:05.537003       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 17:29:05.537134       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 17:29:05.538363       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 17:29:05.541735       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0422 17:29:05.577284       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.56]
	I0422 17:29:05.586584       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 17:29:05.612414       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0422 17:29:05.616597       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0422 17:29:06.351020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0422 17:29:06.838219       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.220 192.168.39.56]
	
	
	==> kube-controller-manager [05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b] <==
	I0422 17:28:23.752439       1 serving.go:380] Generated self-signed cert in-memory
	I0422 17:28:24.167927       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 17:28:24.167977       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:28:24.170896       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 17:28:24.171079       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 17:28:24.171663       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 17:28:24.171737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0422 17:28:45.344787       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.22:8443/healthz\": dial tcp 192.168.39.22:8443: connect: connection refused"
	
	
	==> kube-controller-manager [5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8] <==
	I0422 17:29:18.967160       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m02"
	I0422 17:29:18.967188       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m03"
	I0422 17:29:18.967210       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-025067-m04"
	I0422 17:29:18.967530       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0422 17:29:18.969595       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 17:29:18.974326       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0422 17:29:18.979967       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0422 17:29:18.985558       1 shared_informer.go:320] Caches are synced for daemon sets
	I0422 17:29:19.383920       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 17:29:19.450287       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 17:29:19.450385       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 17:29:30.367943       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-knd6m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-knd6m\": the object has been modified; please apply your changes to the latest version and try again"
	I0422 17:29:30.368513       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0d51303a-b2b3-4e0f-9a10-dab22f45a23b", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-knd6m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-knd6m": the object has been modified; please apply your changes to the latest version and try again
	I0422 17:29:30.404214       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.898947ms"
	I0422 17:29:30.404382       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="93.107µs"
	I0422 17:29:51.905230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.055798ms"
	I0422 17:29:51.905583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="110.133µs"
	I0422 17:30:00.351986       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-knd6m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-knd6m\": the object has been modified; please apply your changes to the latest version and try again"
	I0422 17:30:00.352969       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0d51303a-b2b3-4e0f-9a10-dab22f45a23b", APIVersion:"v1", ResourceVersion:"240", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-knd6m EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-knd6m": the object has been modified; please apply your changes to the latest version and try again
	I0422 17:30:00.384844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.96924ms"
	I0422 17:30:00.385148       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="197.991µs"
	I0422 17:30:35.891883       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.117µs"
	I0422 17:30:55.195989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.403764ms"
	I0422 17:30:55.196213       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.958µs"
	I0422 17:31:27.384375       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-025067-m04"
	
	
	==> kube-proxy [638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231] <==
	I0422 17:28:24.336531       1 server_linux.go:69] "Using iptables proxy"
	E0422 17:28:26.952527       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:30.024282       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:33.096220       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:39.242317       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:48.455755       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0422 17:29:05.607339       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	I0422 17:29:05.665981       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:29:05.666124       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:29:05.666163       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:29:05.669443       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:29:05.669961       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:29:05.670020       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:29:05.672567       1 config.go:192] "Starting service config controller"
	I0422 17:29:05.672656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:29:05.672788       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:29:05.672874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:29:05.673976       1 config.go:319] "Starting node config controller"
	I0422 17:29:05.674020       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:29:05.773853       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:29:05.773930       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:29:05.774404       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72] <==
	E0422 17:25:37.994304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:41.064302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:41.064635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:41.064833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:41.065357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:41.065491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:41.065632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:47.209155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:47.209244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:47.209338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:47.209413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:47.209486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:47.209529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:56.425639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:56.426398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:59.496576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:59.496708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:02.568615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:02.568807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:11.785431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:11.785512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:24.074288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:24.074502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:27.144183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:27.144246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266] <==
	W0422 17:28:59.999431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.22:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:28:59.999531       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.22:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:00.248383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.22:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:00.248515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.22:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.063657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.063718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.331333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.22:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.331396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.22:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.497635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.22:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.497749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.22:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.727741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.22:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.727803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.22:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:03.363886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.22:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:03.363948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.22:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:05.418464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:29:05.419019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:29:05.418849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:29:05.419335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 17:29:05.418914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:29:05.419478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:29:05.418958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:29:05.419667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 17:29:05.421363       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:29:05.421473       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 17:29:22.055126       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89] <==
	W0422 17:26:33.232965       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:33.232999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 17:26:33.323725       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:26:33.323785       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 17:26:33.619457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 17:26:33.619517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:26:33.632550       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:26:33.632574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:26:33.866280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:26:33.866421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 17:26:33.908821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:33.908922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 17:26:34.117983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:26:34.118076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 17:26:34.736974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:34.737085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 17:26:34.867445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 17:26:34.867475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 17:26:35.728687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:26:35.728807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:26:40.289716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 17:26:40.289876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 17:26:40.710556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:40.710598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:40.897602       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 17:29:31 ha-025067 kubelet[1368]: I0422 17:29:31.460555    1368 scope.go:117] "RemoveContainer" containerID="251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f"
	Apr 22 17:29:31 ha-025067 kubelet[1368]: E0422 17:29:31.461392    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b)\"" pod="kube-system/storage-provisioner" podUID="68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b"
	Apr 22 17:29:34 ha-025067 kubelet[1368]: E0422 17:29:34.516976    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:29:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:29:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:29:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:29:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:29:39 ha-025067 kubelet[1368]: I0422 17:29:39.459719    1368 scope.go:117] "RemoveContainer" containerID="699b754e810591cf7919bf90a3745a3dc53bd122b0550f9d614c7633e290e1ae"
	Apr 22 17:29:42 ha-025067 kubelet[1368]: I0422 17:29:42.459956    1368 scope.go:117] "RemoveContainer" containerID="251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f"
	Apr 22 17:29:42 ha-025067 kubelet[1368]: E0422 17:29:42.460254    1368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b)\"" pod="kube-system/storage-provisioner" podUID="68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b"
	Apr 22 17:29:45 ha-025067 kubelet[1368]: I0422 17:29:45.460571    1368 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-025067" podUID="8c381060-83d4-411b-98ac-c6b1842cd3d8"
	Apr 22 17:29:45 ha-025067 kubelet[1368]: I0422 17:29:45.481007    1368 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-025067"
	Apr 22 17:29:56 ha-025067 kubelet[1368]: I0422 17:29:56.463942    1368 scope.go:117] "RemoveContainer" containerID="251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f"
	Apr 22 17:29:59 ha-025067 kubelet[1368]: I0422 17:29:59.264908    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-l97ld" podStartSLOduration=583.649191038 podStartE2EDuration="9m46.264878956s" podCreationTimestamp="2024-04-22 17:20:13 +0000 UTC" firstStartedPulling="2024-04-22 17:20:14.247298805 +0000 UTC m=+159.963554649" lastFinishedPulling="2024-04-22 17:20:16.862986722 +0000 UTC m=+162.579242567" observedRunningTime="2024-04-22 17:20:17.257430596 +0000 UTC m=+162.973686464" watchObservedRunningTime="2024-04-22 17:29:59.264878956 +0000 UTC m=+744.981134820"
	Apr 22 17:29:59 ha-025067 kubelet[1368]: I0422 17:29:59.286984    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-025067" podStartSLOduration=14.286963447 podStartE2EDuration="14.286963447s" podCreationTimestamp="2024-04-22 17:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 17:29:59.283570209 +0000 UTC m=+744.999826071" watchObservedRunningTime="2024-04-22 17:29:59.286963447 +0000 UTC m=+745.003219311"
	Apr 22 17:30:34 ha-025067 kubelet[1368]: E0422 17:30:34.512436    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:30:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:30:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:30:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:30:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:31:34 ha-025067 kubelet[1368]: E0422 17:31:34.518595    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:31:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:31:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:31:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:31:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 17:31:34.928196   38510 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-025067 -n ha-025067
helpers_test.go:261: (dbg) Run:  kubectl --context ha-025067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (419.59s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 stop -v=7 --alsologtostderr: exit status 82 (2m0.498103441s)

                                                
                                                
-- stdout --
	* Stopping node "ha-025067-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:31:55.132918   38917 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:31:55.133027   38917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:31:55.133039   38917 out.go:304] Setting ErrFile to fd 2...
	I0422 17:31:55.133044   38917 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:31:55.133252   38917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:31:55.133536   38917 out.go:298] Setting JSON to false
	I0422 17:31:55.133634   38917 mustload.go:65] Loading cluster: ha-025067
	I0422 17:31:55.134015   38917 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:31:55.134113   38917 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:31:55.134313   38917 mustload.go:65] Loading cluster: ha-025067
	I0422 17:31:55.134516   38917 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:31:55.134555   38917 stop.go:39] StopHost: ha-025067-m04
	I0422 17:31:55.134984   38917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:31:55.135031   38917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:31:55.150172   38917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0422 17:31:55.150646   38917 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:31:55.151278   38917 main.go:141] libmachine: Using API Version  1
	I0422 17:31:55.151323   38917 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:31:55.151687   38917 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:31:55.153955   38917 out.go:177] * Stopping node "ha-025067-m04"  ...
	I0422 17:31:55.155364   38917 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 17:31:55.155397   38917 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:31:55.155626   38917 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 17:31:55.155673   38917 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:31:55.158444   38917 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:31:55.158964   38917 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:31:22 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:31:55.159001   38917 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:31:55.159208   38917 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:31:55.159416   38917 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:31:55.159580   38917 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:31:55.159734   38917 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	I0422 17:31:55.252291   38917 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 17:31:55.306427   38917 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 17:31:55.359973   38917 main.go:141] libmachine: Stopping "ha-025067-m04"...
	I0422 17:31:55.360018   38917 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:31:55.361629   38917 main.go:141] libmachine: (ha-025067-m04) Calling .Stop
	I0422 17:31:55.364957   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 0/120
	I0422 17:31:56.366171   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 1/120
	I0422 17:31:57.367738   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 2/120
	I0422 17:31:58.369122   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 3/120
	I0422 17:31:59.370759   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 4/120
	I0422 17:32:00.372928   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 5/120
	I0422 17:32:01.374678   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 6/120
	I0422 17:32:02.376156   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 7/120
	I0422 17:32:03.377685   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 8/120
	I0422 17:32:04.378958   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 9/120
	I0422 17:32:05.381145   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 10/120
	I0422 17:32:06.382581   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 11/120
	I0422 17:32:07.383954   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 12/120
	I0422 17:32:08.385302   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 13/120
	I0422 17:32:09.386994   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 14/120
	I0422 17:32:10.389190   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 15/120
	I0422 17:32:11.390686   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 16/120
	I0422 17:32:12.392025   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 17/120
	I0422 17:32:13.393858   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 18/120
	I0422 17:32:14.395386   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 19/120
	I0422 17:32:15.397577   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 20/120
	I0422 17:32:16.398916   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 21/120
	I0422 17:32:17.400518   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 22/120
	I0422 17:32:18.402107   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 23/120
	I0422 17:32:19.404047   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 24/120
	I0422 17:32:20.406177   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 25/120
	I0422 17:32:21.407473   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 26/120
	I0422 17:32:22.409751   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 27/120
	I0422 17:32:23.411112   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 28/120
	I0422 17:32:24.412531   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 29/120
	I0422 17:32:25.414713   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 30/120
	I0422 17:32:26.416194   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 31/120
	I0422 17:32:27.417623   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 32/120
	I0422 17:32:28.418998   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 33/120
	I0422 17:32:29.421027   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 34/120
	I0422 17:32:30.423159   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 35/120
	I0422 17:32:31.424572   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 36/120
	I0422 17:32:32.426125   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 37/120
	I0422 17:32:33.427575   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 38/120
	I0422 17:32:34.429698   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 39/120
	I0422 17:32:35.431530   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 40/120
	I0422 17:32:36.432954   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 41/120
	I0422 17:32:37.434225   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 42/120
	I0422 17:32:38.435996   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 43/120
	I0422 17:32:39.437692   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 44/120
	I0422 17:32:40.439863   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 45/120
	I0422 17:32:41.441198   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 46/120
	I0422 17:32:42.442708   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 47/120
	I0422 17:32:43.444143   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 48/120
	I0422 17:32:44.446295   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 49/120
	I0422 17:32:45.448321   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 50/120
	I0422 17:32:46.449774   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 51/120
	I0422 17:32:47.451153   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 52/120
	I0422 17:32:48.452866   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 53/120
	I0422 17:32:49.454208   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 54/120
	I0422 17:32:50.456093   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 55/120
	I0422 17:32:51.457460   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 56/120
	I0422 17:32:52.458839   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 57/120
	I0422 17:32:53.460174   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 58/120
	I0422 17:32:54.461661   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 59/120
	I0422 17:32:55.464020   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 60/120
	I0422 17:32:56.465204   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 61/120
	I0422 17:32:57.466482   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 62/120
	I0422 17:32:58.467982   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 63/120
	I0422 17:32:59.469288   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 64/120
	I0422 17:33:00.471201   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 65/120
	I0422 17:33:01.472585   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 66/120
	I0422 17:33:02.474733   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 67/120
	I0422 17:33:03.476229   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 68/120
	I0422 17:33:04.477712   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 69/120
	I0422 17:33:05.479654   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 70/120
	I0422 17:33:06.480955   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 71/120
	I0422 17:33:07.482346   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 72/120
	I0422 17:33:08.484071   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 73/120
	I0422 17:33:09.485379   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 74/120
	I0422 17:33:10.487336   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 75/120
	I0422 17:33:11.488601   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 76/120
	I0422 17:33:12.489837   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 77/120
	I0422 17:33:13.491237   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 78/120
	I0422 17:33:14.492618   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 79/120
	I0422 17:33:15.494794   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 80/120
	I0422 17:33:16.496685   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 81/120
	I0422 17:33:17.498855   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 82/120
	I0422 17:33:18.500261   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 83/120
	I0422 17:33:19.501714   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 84/120
	I0422 17:33:20.503665   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 85/120
	I0422 17:33:21.504941   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 86/120
	I0422 17:33:22.506429   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 87/120
	I0422 17:33:23.507793   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 88/120
	I0422 17:33:24.509703   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 89/120
	I0422 17:33:25.512139   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 90/120
	I0422 17:33:26.513833   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 91/120
	I0422 17:33:27.515600   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 92/120
	I0422 17:33:28.516966   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 93/120
	I0422 17:33:29.518794   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 94/120
	I0422 17:33:30.520971   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 95/120
	I0422 17:33:31.522304   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 96/120
	I0422 17:33:32.523724   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 97/120
	I0422 17:33:33.525059   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 98/120
	I0422 17:33:34.526494   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 99/120
	I0422 17:33:35.528803   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 100/120
	I0422 17:33:36.530164   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 101/120
	I0422 17:33:37.532111   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 102/120
	I0422 17:33:38.533623   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 103/120
	I0422 17:33:39.535213   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 104/120
	I0422 17:33:40.537318   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 105/120
	I0422 17:33:41.538671   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 106/120
	I0422 17:33:42.540746   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 107/120
	I0422 17:33:43.542181   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 108/120
	I0422 17:33:44.543768   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 109/120
	I0422 17:33:45.545891   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 110/120
	I0422 17:33:46.547252   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 111/120
	I0422 17:33:47.549596   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 112/120
	I0422 17:33:48.551195   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 113/120
	I0422 17:33:49.552502   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 114/120
	I0422 17:33:50.554667   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 115/120
	I0422 17:33:51.555982   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 116/120
	I0422 17:33:52.557527   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 117/120
	I0422 17:33:53.560080   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 118/120
	I0422 17:33:54.561422   38917 main.go:141] libmachine: (ha-025067-m04) Waiting for machine to stop 119/120
	I0422 17:33:55.562042   38917 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 17:33:55.562104   38917 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0422 17:33:55.564254   38917 out.go:177] 
	W0422 17:33:55.565799   38917 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0422 17:33:55.565816   38917 out.go:239] * 
	* 
	W0422 17:33:55.568119   38917 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 17:33:55.569512   38917 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-025067 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr: exit status 3 (19.079024186s)

                                                
                                                
-- stdout --
	ha-025067
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-025067-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:33:55.630094   39355 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:33:55.630223   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:33:55.630233   39355 out.go:304] Setting ErrFile to fd 2...
	I0422 17:33:55.630237   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:33:55.630441   39355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:33:55.630689   39355 out.go:298] Setting JSON to false
	I0422 17:33:55.630714   39355 mustload.go:65] Loading cluster: ha-025067
	I0422 17:33:55.630830   39355 notify.go:220] Checking for updates...
	I0422 17:33:55.631172   39355 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:33:55.631191   39355 status.go:255] checking status of ha-025067 ...
	I0422 17:33:55.631598   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:55.631656   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:55.648796   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44863
	I0422 17:33:55.649246   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:55.649822   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:55.649851   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:55.650167   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:55.650345   39355 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:33:55.651917   39355 status.go:330] ha-025067 host status = "Running" (err=<nil>)
	I0422 17:33:55.651941   39355 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:33:55.652251   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:55.652294   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:55.666861   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46849
	I0422 17:33:55.667305   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:55.667739   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:55.667766   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:55.668028   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:55.668188   39355 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:33:55.671043   39355 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:33:55.671497   39355 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:33:55.671522   39355 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:33:55.671650   39355 host.go:66] Checking if "ha-025067" exists ...
	I0422 17:33:55.671917   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:55.671951   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:55.686068   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0422 17:33:55.686444   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:55.686877   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:55.686898   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:55.687185   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:55.687376   39355 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:33:55.687574   39355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:33:55.687611   39355 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:33:55.690019   39355 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:33:55.690388   39355 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:33:55.690442   39355 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:33:55.690574   39355 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:33:55.690773   39355 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:33:55.690914   39355 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:33:55.691032   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:33:55.781284   39355 ssh_runner.go:195] Run: systemctl --version
	I0422 17:33:55.793279   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:33:55.811756   39355 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:33:55.811779   39355 api_server.go:166] Checking apiserver status ...
	I0422 17:33:55.811808   39355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:33:55.832543   39355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5340/cgroup
	W0422 17:33:55.843784   39355 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5340/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:33:55.843840   39355 ssh_runner.go:195] Run: ls
	I0422 17:33:55.849829   39355 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:33:55.854433   39355 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:33:55.854454   39355 status.go:422] ha-025067 apiserver status = Running (err=<nil>)
	I0422 17:33:55.854464   39355 status.go:257] ha-025067 status: &{Name:ha-025067 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:33:55.854481   39355 status.go:255] checking status of ha-025067-m02 ...
	I0422 17:33:55.854757   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:55.854788   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:55.869404   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0422 17:33:55.869740   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:55.870192   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:55.870214   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:55.870525   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:55.870732   39355 main.go:141] libmachine: (ha-025067-m02) Calling .GetState
	I0422 17:33:55.872673   39355 status.go:330] ha-025067-m02 host status = "Running" (err=<nil>)
	I0422 17:33:55.872694   39355 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:33:55.872964   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:55.872995   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:55.888916   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32895
	I0422 17:33:55.889403   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:55.889876   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:55.889903   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:55.890217   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:55.890369   39355 main.go:141] libmachine: (ha-025067-m02) Calling .GetIP
	I0422 17:33:55.893462   39355 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:33:55.893924   39355 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:28:28 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:33:55.893950   39355 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:33:55.894226   39355 host.go:66] Checking if "ha-025067-m02" exists ...
	I0422 17:33:55.894651   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:55.894700   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:55.911305   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33211
	I0422 17:33:55.911812   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:55.912274   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:55.912302   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:55.912607   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:55.912804   39355 main.go:141] libmachine: (ha-025067-m02) Calling .DriverName
	I0422 17:33:55.913024   39355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:33:55.913045   39355 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHHostname
	I0422 17:33:55.915793   39355 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:33:55.916315   39355 main.go:141] libmachine: (ha-025067-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f3:68:d1", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:28:28 +0000 UTC Type:0 Mac:52:54:00:f3:68:d1 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-025067-m02 Clientid:01:52:54:00:f3:68:d1}
	I0422 17:33:55.916345   39355 main.go:141] libmachine: (ha-025067-m02) DBG | domain ha-025067-m02 has defined IP address 192.168.39.56 and MAC address 52:54:00:f3:68:d1 in network mk-ha-025067
	I0422 17:33:55.916505   39355 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHPort
	I0422 17:33:55.916666   39355 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHKeyPath
	I0422 17:33:55.916830   39355 main.go:141] libmachine: (ha-025067-m02) Calling .GetSSHUsername
	I0422 17:33:55.916941   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m02/id_rsa Username:docker}
	I0422 17:33:56.005386   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:33:56.029152   39355 kubeconfig.go:125] found "ha-025067" server: "https://192.168.39.254:8443"
	I0422 17:33:56.029190   39355 api_server.go:166] Checking apiserver status ...
	I0422 17:33:56.029233   39355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:33:56.047191   39355 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup
	W0422 17:33:56.057668   39355 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1606/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:33:56.057720   39355 ssh_runner.go:195] Run: ls
	I0422 17:33:56.062550   39355 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0422 17:33:56.067378   39355 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0422 17:33:56.067406   39355 status.go:422] ha-025067-m02 apiserver status = Running (err=<nil>)
	I0422 17:33:56.067415   39355 status.go:257] ha-025067-m02 status: &{Name:ha-025067-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:33:56.067427   39355 status.go:255] checking status of ha-025067-m04 ...
	I0422 17:33:56.067713   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:56.067747   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:56.083326   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0422 17:33:56.083890   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:56.084391   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:56.084417   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:56.084744   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:56.084882   39355 main.go:141] libmachine: (ha-025067-m04) Calling .GetState
	I0422 17:33:56.086604   39355 status.go:330] ha-025067-m04 host status = "Running" (err=<nil>)
	I0422 17:33:56.086624   39355 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:33:56.087003   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:56.087064   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:56.101572   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42173
	I0422 17:33:56.102035   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:56.102607   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:56.102633   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:56.102930   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:56.103101   39355 main.go:141] libmachine: (ha-025067-m04) Calling .GetIP
	I0422 17:33:56.105725   39355 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:33:56.106128   39355 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:31:22 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:33:56.106156   39355 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:33:56.106282   39355 host.go:66] Checking if "ha-025067-m04" exists ...
	I0422 17:33:56.106558   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:33:56.106600   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:33:56.120980   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40515
	I0422 17:33:56.121339   39355 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:33:56.121729   39355 main.go:141] libmachine: Using API Version  1
	I0422 17:33:56.121751   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:33:56.122098   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:33:56.122271   39355 main.go:141] libmachine: (ha-025067-m04) Calling .DriverName
	I0422 17:33:56.122444   39355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:33:56.122471   39355 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHHostname
	I0422 17:33:56.125249   39355 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:33:56.125661   39355 main.go:141] libmachine: (ha-025067-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:b1:49", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:31:22 +0000 UTC Type:0 Mac:52:54:00:20:b1:49 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:ha-025067-m04 Clientid:01:52:54:00:20:b1:49}
	I0422 17:33:56.125690   39355 main.go:141] libmachine: (ha-025067-m04) DBG | domain ha-025067-m04 has defined IP address 192.168.39.80 and MAC address 52:54:00:20:b1:49 in network mk-ha-025067
	I0422 17:33:56.125827   39355 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHPort
	I0422 17:33:56.125962   39355 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHKeyPath
	I0422 17:33:56.126108   39355 main.go:141] libmachine: (ha-025067-m04) Calling .GetSSHUsername
	I0422 17:33:56.126274   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067-m04/id_rsa Username:docker}
	W0422 17:34:14.651353   39355 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.80:22: connect: no route to host
	W0422 17:34:14.651448   39355 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	E0422 17:34:14.651469   39355 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host
	I0422 17:34:14.651477   39355 status.go:257] ha-025067-m04 status: &{Name:ha-025067-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0422 17:34:14.651501   39355 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.80:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-025067 -n ha-025067
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-025067 logs -n 25: (1.914836203s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                      Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m04 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp testdata/cp-test.txt                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04:/home/docker/cp-test.txt                                          |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067:/home/docker/cp-test_ha-025067-m04_ha-025067.txt                      |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067 sudo cat                                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067.txt                                |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m02:/home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m02 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt                            |           |         |         |                     |                     |
	| cp      | ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m03:/home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt              |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n                                                                | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | ha-025067-m04 sudo cat                                                          |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                        |           |         |         |                     |                     |
	| ssh     | ha-025067 ssh -n ha-025067-m03 sudo cat                                         | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC | 22 Apr 24 17:21 UTC |
	|         | /home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt                            |           |         |         |                     |                     |
	| node    | ha-025067 node stop m02 -v=7                                                    | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | ha-025067 node start m02 -v=7                                                   | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:23 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-025067 -v=7                                                          | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | -p ha-025067 -v=7                                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:24 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| start   | -p ha-025067 --wait=true -v=7                                                   | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:26 UTC | 22 Apr 24 17:31 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| node    | list -p ha-025067                                                               | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:31 UTC |                     |
	| node    | ha-025067 node delete m03 -v=7                                                  | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:31 UTC | 22 Apr 24 17:31 UTC |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	| stop    | ha-025067 stop -v=7                                                             | ha-025067 | jenkins | v1.33.0 | 22 Apr 24 17:31 UTC |                     |
	|         | --alsologtostderr                                                               |           |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 17:26:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 17:26:39.773338   36982 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:26:39.773457   36982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:26:39.773467   36982 out.go:304] Setting ErrFile to fd 2...
	I0422 17:26:39.773470   36982 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:26:39.773648   36982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:26:39.774180   36982 out.go:298] Setting JSON to false
	I0422 17:26:39.775116   36982 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4145,"bootTime":1713802655,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:26:39.775195   36982 start.go:139] virtualization: kvm guest
	I0422 17:26:39.777634   36982 out.go:177] * [ha-025067] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:26:39.779353   36982 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:26:39.779296   36982 notify.go:220] Checking for updates...
	I0422 17:26:39.781922   36982 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:26:39.783402   36982 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:26:39.784628   36982 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:26:39.785849   36982 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:26:39.787070   36982 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:26:39.788888   36982 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:26:39.788980   36982 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:26:39.789382   36982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:26:39.789449   36982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:26:39.804180   36982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46609
	I0422 17:26:39.804636   36982 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:26:39.805194   36982 main.go:141] libmachine: Using API Version  1
	I0422 17:26:39.805230   36982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:26:39.805541   36982 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:26:39.805755   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:26:39.840752   36982 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 17:26:39.842158   36982 start.go:297] selected driver: kvm2
	I0422 17:26:39.842175   36982 start.go:901] validating driver "kvm2" against &{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:
false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:26:39.842309   36982 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:26:39.842620   36982 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:26:39.842680   36982 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 17:26:39.857539   36982 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 17:26:39.858538   36982 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:26:39.858605   36982 cni.go:84] Creating CNI manager for ""
	I0422 17:26:39.858617   36982 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 17:26:39.858684   36982 start.go:340] cluster config:
	{Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tille
r:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:26:39.858819   36982 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:26:39.861594   36982 out.go:177] * Starting "ha-025067" primary control-plane node in "ha-025067" cluster
	I0422 17:26:39.862997   36982 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:26:39.863043   36982 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 17:26:39.863053   36982 cache.go:56] Caching tarball of preloaded images
	I0422 17:26:39.863165   36982 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:26:39.863182   36982 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:26:39.863288   36982 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/config.json ...
	I0422 17:26:39.863481   36982 start.go:360] acquireMachinesLock for ha-025067: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:26:39.863523   36982 start.go:364] duration metric: took 23.028µs to acquireMachinesLock for "ha-025067"
	I0422 17:26:39.863542   36982 start.go:96] Skipping create...Using existing machine configuration
	I0422 17:26:39.863550   36982 fix.go:54] fixHost starting: 
	I0422 17:26:39.863794   36982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:26:39.863829   36982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:26:39.877777   36982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0422 17:26:39.878191   36982 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:26:39.878692   36982 main.go:141] libmachine: Using API Version  1
	I0422 17:26:39.878712   36982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:26:39.879027   36982 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:26:39.879266   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:26:39.879443   36982 main.go:141] libmachine: (ha-025067) Calling .GetState
	I0422 17:26:39.880922   36982 fix.go:112] recreateIfNeeded on ha-025067: state=Running err=<nil>
	W0422 17:26:39.880938   36982 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 17:26:39.883730   36982 out.go:177] * Updating the running kvm2 "ha-025067" VM ...
	I0422 17:26:39.885028   36982 machine.go:94] provisionDockerMachine start ...
	I0422 17:26:39.885049   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:26:39.885236   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:39.887733   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:39.888228   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:39.888247   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:39.888398   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:39.888576   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:39.888730   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:39.888863   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:39.889001   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:39.889173   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:39.889186   36982 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 17:26:40.004628   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067
	
	I0422 17:26:40.004656   36982 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:26:40.004907   36982 buildroot.go:166] provisioning hostname "ha-025067"
	I0422 17:26:40.004940   36982 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:26:40.005159   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.007930   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.008342   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.008365   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.008547   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.008733   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.008886   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.009066   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.009277   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:40.009449   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:40.009461   36982 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-025067 && echo "ha-025067" | sudo tee /etc/hostname
	I0422 17:26:40.142485   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-025067
	
	I0422 17:26:40.142548   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.145459   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.145923   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.145961   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.146152   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.146345   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.146481   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.146616   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.146760   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:40.146944   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:40.146968   36982 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-025067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-025067/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-025067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:26:40.264502   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:26:40.264528   36982 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:26:40.264544   36982 buildroot.go:174] setting up certificates
	I0422 17:26:40.264551   36982 provision.go:84] configureAuth start
	I0422 17:26:40.264558   36982 main.go:141] libmachine: (ha-025067) Calling .GetMachineName
	I0422 17:26:40.264812   36982 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:26:40.267914   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.268333   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.268376   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.268627   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.270810   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.271269   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.271293   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.271413   36982 provision.go:143] copyHostCerts
	I0422 17:26:40.271446   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:26:40.271483   36982 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:26:40.271492   36982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:26:40.271572   36982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:26:40.271676   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:26:40.271703   36982 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:26:40.271714   36982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:26:40.271751   36982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:26:40.271873   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:26:40.271899   36982 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:26:40.271909   36982 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:26:40.271941   36982 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:26:40.272007   36982 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.ha-025067 san=[127.0.0.1 192.168.39.22 ha-025067 localhost minikube]
	I0422 17:26:40.557019   36982 provision.go:177] copyRemoteCerts
	I0422 17:26:40.557080   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:26:40.557102   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.560223   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.560595   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.560622   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.560760   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.560961   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.561156   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.561307   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:26:40.650872   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:26:40.650947   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:26:40.679611   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:26:40.679709   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0422 17:26:40.716656   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:26:40.716754   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 17:26:40.760045   36982 provision.go:87] duration metric: took 495.482143ms to configureAuth
	I0422 17:26:40.760071   36982 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:26:40.760305   36982 config.go:182] Loaded profile config "ha-025067": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:26:40.760385   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:26:40.763115   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.763513   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:26:40.763546   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:26:40.763708   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:26:40.763925   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.764066   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:26:40.764241   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:26:40.764392   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:26:40.764587   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:26:40.764604   36982 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:28:11.669894   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:28:11.669925   36982 machine.go:97] duration metric: took 1m31.784881509s to provisionDockerMachine
	I0422 17:28:11.669941   36982 start.go:293] postStartSetup for "ha-025067" (driver="kvm2")
	I0422 17:28:11.669954   36982 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:28:11.669976   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:11.670251   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:28:11.670305   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:11.673313   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.673768   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:11.673795   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.674016   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:11.674185   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.674341   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:11.674511   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:28:11.760640   36982 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:28:11.765114   36982 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:28:11.765136   36982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:28:11.765211   36982 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:28:11.765298   36982 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:28:11.765309   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:28:11.765386   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:28:11.776083   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:28:11.802189   36982 start.go:296] duration metric: took 132.232595ms for postStartSetup
	I0422 17:28:11.802230   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:11.802531   36982 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0422 17:28:11.802559   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:11.804882   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.805306   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:11.805331   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.805514   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:11.805680   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.805833   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:11.805992   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	W0422 17:28:11.890706   36982 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0422 17:28:11.890735   36982 fix.go:56] duration metric: took 1m32.027183955s for fixHost
	I0422 17:28:11.890763   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:11.893368   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.893705   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:11.893748   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:11.893877   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:11.894045   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.894200   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:11.894349   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:11.894486   36982 main.go:141] libmachine: Using SSH client type: native
	I0422 17:28:11.894681   36982 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.22 22 <nil> <nil>}
	I0422 17:28:11.894693   36982 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:28:12.010085   36982 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713806891.989147578
	
	I0422 17:28:12.010107   36982 fix.go:216] guest clock: 1713806891.989147578
	I0422 17:28:12.010114   36982 fix.go:229] Guest: 2024-04-22 17:28:11.989147578 +0000 UTC Remote: 2024-04-22 17:28:11.890747238 +0000 UTC m=+92.165568008 (delta=98.40034ms)
	I0422 17:28:12.010131   36982 fix.go:200] guest clock delta is within tolerance: 98.40034ms
	I0422 17:28:12.010136   36982 start.go:83] releasing machines lock for "ha-025067", held for 1m32.146602236s
	I0422 17:28:12.010155   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.010458   36982 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:28:12.012946   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.013365   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:12.013388   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.013526   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.013993   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.014171   36982 main.go:141] libmachine: (ha-025067) Calling .DriverName
	I0422 17:28:12.014265   36982 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:28:12.014327   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:12.014343   36982 ssh_runner.go:195] Run: cat /version.json
	I0422 17:28:12.014362   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHHostname
	I0422 17:28:12.016887   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017211   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017245   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:12.017267   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017397   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:12.017557   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:12.017604   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:12.017623   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:12.017680   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:12.017771   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHPort
	I0422 17:28:12.017839   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:28:12.017935   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHKeyPath
	I0422 17:28:12.018082   36982 main.go:141] libmachine: (ha-025067) Calling .GetSSHUsername
	I0422 17:28:12.018214   36982 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/ha-025067/id_rsa Username:docker}
	I0422 17:28:12.133366   36982 ssh_runner.go:195] Run: systemctl --version
	I0422 17:28:12.140018   36982 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:28:12.307651   36982 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 17:28:12.314358   36982 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:28:12.314430   36982 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:28:12.325562   36982 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 17:28:12.325590   36982 start.go:494] detecting cgroup driver to use...
	I0422 17:28:12.325652   36982 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:28:12.343610   36982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:28:12.359089   36982 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:28:12.359157   36982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:28:12.374436   36982 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:28:12.389361   36982 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:28:12.540222   36982 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:28:12.691473   36982 docker.go:233] disabling docker service ...
	I0422 17:28:12.691534   36982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:28:12.710923   36982 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:28:12.725551   36982 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:28:12.877489   36982 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:28:13.030445   36982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:28:13.044709   36982 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:28:13.065897   36982 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:28:13.065954   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.077198   36982 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:28:13.077259   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.087966   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.099022   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.110116   36982 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:28:13.120899   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.131337   36982 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.143635   36982 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:28:13.154572   36982 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:28:13.164479   36982 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:28:13.174198   36982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:28:13.328596   36982 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:28:15.591596   36982 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.262959858s)
	I0422 17:28:15.591636   36982 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:28:15.591691   36982 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:28:15.596873   36982 start.go:562] Will wait 60s for crictl version
	I0422 17:28:15.596954   36982 ssh_runner.go:195] Run: which crictl
	I0422 17:28:15.601220   36982 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:28:15.645216   36982 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:28:15.645295   36982 ssh_runner.go:195] Run: crio --version
	I0422 17:28:15.680613   36982 ssh_runner.go:195] Run: crio --version
	I0422 17:28:15.712048   36982 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:28:15.713251   36982 main.go:141] libmachine: (ha-025067) Calling .GetIP
	I0422 17:28:15.715865   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:15.716284   36982 main.go:141] libmachine: (ha-025067) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:2a:21", ip: ""} in network mk-ha-025067: {Iface:virbr1 ExpiryTime:2024-04-22 18:17:07 +0000 UTC Type:0 Mac:52:54:00:8b:2a:21 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-025067 Clientid:01:52:54:00:8b:2a:21}
	I0422 17:28:15.716311   36982 main.go:141] libmachine: (ha-025067) DBG | domain ha-025067 has defined IP address 192.168.39.22 and MAC address 52:54:00:8b:2a:21 in network mk-ha-025067
	I0422 17:28:15.716506   36982 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:28:15.721464   36982 kubeadm.go:877] updating cluster {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 17:28:15.721606   36982 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:28:15.721670   36982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:28:15.767510   36982 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:28:15.767531   36982 crio.go:433] Images already preloaded, skipping extraction
	I0422 17:28:15.767582   36982 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:28:15.805433   36982 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:28:15.805453   36982 cache_images.go:84] Images are preloaded, skipping loading
	I0422 17:28:15.805461   36982 kubeadm.go:928] updating node { 192.168.39.22 8443 v1.30.0 crio true true} ...
	I0422 17:28:15.805578   36982 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-025067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:28:15.805640   36982 ssh_runner.go:195] Run: crio config
	I0422 17:28:15.860135   36982 cni.go:84] Creating CNI manager for ""
	I0422 17:28:15.860154   36982 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0422 17:28:15.860162   36982 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 17:28:15.860182   36982 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.22 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-025067 NodeName:ha-025067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 17:28:15.860312   36982 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-025067"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 17:28:15.860330   36982 kube-vip.go:111] generating kube-vip config ...
	I0422 17:28:15.860366   36982 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0422 17:28:15.873243   36982 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0422 17:28:15.873377   36982 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0422 17:28:15.873431   36982 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:28:15.883898   36982 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 17:28:15.883975   36982 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0422 17:28:15.894183   36982 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0422 17:28:15.912857   36982 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:28:15.930861   36982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0422 17:28:15.981653   36982 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0422 17:28:16.038420   36982 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0422 17:28:16.044450   36982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:28:16.262632   36982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:28:16.279530   36982 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067 for IP: 192.168.39.22
	I0422 17:28:16.279557   36982 certs.go:194] generating shared ca certs ...
	I0422 17:28:16.279577   36982 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:28:16.279773   36982 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:28:16.279841   36982 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:28:16.279859   36982 certs.go:256] generating profile certs ...
	I0422 17:28:16.279969   36982 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/client.key
	I0422 17:28:16.280006   36982 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f
	I0422 17:28:16.280026   36982 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.22 192.168.39.56 192.168.39.220 192.168.39.254]
	I0422 17:28:16.379927   36982 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f ...
	I0422 17:28:16.379960   36982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f: {Name:mkcca004100db755f659718d99336dc23fea15d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:28:16.380155   36982 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f ...
	I0422 17:28:16.380172   36982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f: {Name:mk1fc3b370c6e75b85764b9a115dc2c170aa8ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:28:16.380272   36982 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt.27ba973f -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt
	I0422 17:28:16.380456   36982 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key.27ba973f -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key
	I0422 17:28:16.380589   36982 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key
	I0422 17:28:16.380606   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:28:16.380617   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:28:16.380630   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:28:16.380641   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:28:16.380651   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:28:16.380662   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:28:16.380679   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:28:16.380692   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:28:16.380740   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:28:16.380765   36982 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:28:16.380775   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:28:16.380795   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:28:16.380815   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:28:16.380837   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:28:16.380873   36982 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:28:16.380899   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.380912   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.380925   36982 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.381427   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:28:16.410294   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:28:16.438521   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:28:16.465847   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:28:16.493212   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 17:28:16.520182   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 17:28:16.547255   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:28:16.573323   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/ha-025067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:28:16.598513   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:28:16.624114   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:28:16.648891   36982 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:28:16.673889   36982 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 17:28:16.690995   36982 ssh_runner.go:195] Run: openssl version
	I0422 17:28:16.697122   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:28:16.708554   36982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.713279   36982 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.713322   36982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:28:16.719285   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:28:16.729638   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:28:16.741269   36982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.746056   36982 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.746103   36982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:28:16.752064   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:28:16.761667   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:28:16.772528   36982 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.777108   36982 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.777150   36982 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:28:16.782890   36982 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:28:16.792745   36982 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:28:16.797462   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 17:28:16.803577   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 17:28:16.809563   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 17:28:16.815662   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 17:28:16.821701   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 17:28:16.827932   36982 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 17:28:16.834152   36982 kubeadm.go:391] StartCluster: {Name:ha-025067 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-025067 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.56 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.220 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.80 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod
:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:28:16.834300   36982 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 17:28:16.834361   36982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 17:28:16.875599   36982 cri.go:89] found id: "709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55"
	I0422 17:28:16.875626   36982 cri.go:89] found id: "36ec323d3d57efa3a7865ea0b4c446d1d9693b29cfb5c8e4b0a8565ee2168e49"
	I0422 17:28:16.875632   36982 cri.go:89] found id: "53b06374b06dd939153ea52cde33ea4c9e5af1b0e71567ac085fa60e1b50dfcc"
	I0422 17:28:16.875637   36982 cri.go:89] found id: "28513ecdee88f049380a68df0b9401d304006fbcb6989f1646099457402abd21"
	I0422 17:28:16.875641   36982 cri.go:89] found id: "478a7702f9b4c7965c1fd709bf3d979b179890a418978a00a44bbf3e96db423f"
	I0422 17:28:16.875646   36982 cri.go:89] found id: "0b296211b78c6b52e828836706f135ff2f7d87792e805046d4ed6b64f100a063"
	I0422 17:28:16.875649   36982 cri.go:89] found id: "2c0e4f60a87d1a3c2f83f01e0fa5a6937f3791fa29bc400e9be66081cc41c0ca"
	I0422 17:28:16.875653   36982 cri.go:89] found id: "c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55"
	I0422 17:28:16.875658   36982 cri.go:89] found id: "524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0"
	I0422 17:28:16.875666   36982 cri.go:89] found id: "f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72"
	I0422 17:28:16.875670   36982 cri.go:89] found id: "ce4c01cd6ca7004ed0092511a9b307c2703767d4b8aab796d7b66cd6cd43e4e3"
	I0422 17:28:16.875683   36982 cri.go:89] found id: "b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f"
	I0422 17:28:16.875691   36982 cri.go:89] found id: "549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89"
	I0422 17:28:16.875695   36982 cri.go:89] found id: "819e89518583820dc26ad886e84d0a1b7015cd8e91eb249f05236a294e7fa158"
	I0422 17:28:16.875703   36982 cri.go:89] found id: "9bc987b1519c5e9379082f10bada889bc03631a79c6cc471e564f0269ba6f03b"
	I0422 17:28:16.875708   36982 cri.go:89] found id: ""
	I0422 17:28:16.875747   36982 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.342263474Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807255342226669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3d3e57e-b9ae-4fdb-87d0-3801d5162a25 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.343233334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5b32e7c-ed66-4064-bf0b-21aaf4bdd594 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.343313198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5b32e7c-ed66-4064-bf0b-21aaf4bdd594 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.344098609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5b32e7c-ed66-4064-bf0b-21aaf4bdd594 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.402014438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf0c6673-b10d-434d-9447-8e46cab8d96d name=/runtime.v1.RuntimeService/Version
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.402207467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf0c6673-b10d-434d-9447-8e46cab8d96d name=/runtime.v1.RuntimeService/Version
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.404571864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d491365b-ffb4-4fcb-b966-7e27512760f6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.405313872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807255405271848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d491365b-ffb4-4fcb-b966-7e27512760f6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.406521322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=353b9c2a-317e-42e6-9e76-cf22021f7749 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.406610063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=353b9c2a-317e-42e6-9e76-cf22021f7749 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.407129146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=353b9c2a-317e-42e6-9e76-cf22021f7749 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.455596863Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e3db74c-1c32-4b5f-b675-837522a4a998 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.455676856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e3db74c-1c32-4b5f-b675-837522a4a998 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.457004963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6f5e669-27a4-4527-9f29-f318d8bb1806 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.457555419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807255457534643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6f5e669-27a4-4527-9f29-f318d8bb1806 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.458334549Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96562f0e-06fe-4d8e-8652-0ac88a35adf4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.458388134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96562f0e-06fe-4d8e-8652-0ac88a35adf4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.458771268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96562f0e-06fe-4d8e-8652-0ac88a35adf4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.504571318Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=43fe15bc-7a44-4616-911c-015a04396fac name=/runtime.v1.RuntimeService/Version
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.504645337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=43fe15bc-7a44-4616-911c-015a04396fac name=/runtime.v1.RuntimeService/Version
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.505831354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=07a8809d-fdfc-4408-828c-ca0c75a34e13 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.506647979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713807255506621837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144960,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=07a8809d-fdfc-4408-828c-ca0c75a34e13 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.507273553Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76abfa64-7d7f-4c58-b577-a4d37102f2be name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.507347689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76abfa64-7d7f-4c58-b577-a4d37102f2be name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:34:15 ha-025067 crio[3943]: time="2024-04-22 17:34:15.507913935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94c2813a81b575edd4dc7855d3509b1f1569bf4360ca047235935c5669c24fc7,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713806996500461024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 5,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713806979480886361,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713806943471568982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernete
s.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713806940487524200,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70e83e476939c000e9255652118081a0bf7092410080ce0f28874ac917ff37b2,PodSandboxId:397ddd568c9014f7d18c49d1a3fb94b19c158b217d91994ba07cfdf50822d5e8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713806935637833890,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernetes.container.hash: d4c8323f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2291407bc9dbda5f44244cb7709b207c81880b3b6586ba70e1a04fd95c939933,PodSandboxId:69b798cf09790b35a24d2ae33e1d10c93f6130f7836ac0cced5316fb7e597a8b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713806916292720943,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4574d673dd5481f188735c2650f2f396,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminat
ionGracePeriod: 30,},},&Container{Id:638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231,PodSandboxId:87020be3f343a303bc477bd6feb1f28cf91dba74173d5f3c43c25f4e5f8d1ee2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713806902512755761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb,PodSandboxId:69eb4762e010cb6d4371a2b24675c8c73f5d64a62d6c6d1168e8a6d8d9cb9140,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713806902425227292,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:699b754e810591cf7919bf90a3745a3dc53bd122b0550f9
d614c7633e290e1ae,PodSandboxId:23783db6b701c880eb4004e43532b0d971b602b6add74fae3ff13aa95ff0e1b2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713806902528569418,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-tmxd9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d448df8-32a2-46e8-bcbf-fac5d147e45f,},Annotations:map[string]string{io.kubernetes.container.hash: fc29735f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611,PodSandboxId:69b0
bb93f5a8d050932e264b85c18317d2b7eea1713ffaf698c7f86d3ad95f0c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806902411916037,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f,PodSandboxId:b09e16f41fde55c49af2c8cfce5732b181b0368f87e0b427a78c15b07f6b84f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713806902239748150,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68d2fd8f-1b8b-48dd-a4f8-16c2a7636d6b,},Annotations:map[string]string{io.kubernetes.container.hash: 3e7d321,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.conta
iner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266,PodSandboxId:31b4c9e60ad47b674abe3d9d7eefb24a2087784cd5630ab93a3fd82ac09e72ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713806902228444441,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5,PodSandboxId:d19bd18d24b2b729d3e7954f35e43f3808316d767f64cf4fc38670b3d19df7b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713806902286747060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dafca65b718398ce567dba12ba2494c7,},Annotations:map[string]string{io.kubernetes.container.hash: af48d06d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kube
rnetes.pod.terminationGracePeriod: 30,},},&Container{Id:05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b,PodSandboxId:aad1cce9e2255e7faa00e65692c933a93bc6ae06b5e8d12a62de0cc166146064,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713806902142653148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd89f0fa3e1221316981adeb7afd503,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io
.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55,PodSandboxId:cd88edc27f6015702162351260ad8a352a760771d95693cf0283ad0faca03adf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713806896184254829,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\"
:9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983cb8537237fc8090d332646d8638bbfc4d08e65ad13e69bf103bccbddf6565,PodSandboxId:3c3abb6c214d4b7779c42ebf5f9d28ecae94aa4cc552e7d0796b861b7cc64ba4,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713806416877797792,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-l97ld,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ca33d56c-e408-4501-9462-76c58f2b23dd,},Annotations:map[string]string{io.kubernete
s.container.hash: d4c8323f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0,PodSandboxId:b553b11bb990b860ebe029f12a2db949d595645168cb860becee0ea3a1cb7326,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270541435204,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-nswqp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bedfb6c0-6553-4ec2-9318-d1997a2994e7,},Annotations:map[string]string{io.kubernetes.container.hash: f94bf13c,io.ku
bernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55,PodSandboxId:c2921baac16b32eefdb2585be234c307d71c5c780262ee1c5679c3fbe8326e04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713806270545385306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-
7db6d8ff4d-vrl4h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f1e548f-9dfd-4bb7-b13c-74e6ac8583f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2fd082b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72,PodSandboxId:052596614cf9ce66dd3bbccfb58bef17bae72920a8c5dc911c34f884b7d955bd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431
fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713806268358725937,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pf7cc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de4d571-9b5a-43ae-9808-4dbf5d1a5e26,},Annotations:map[string]string{io.kubernetes.container.hash: d05a9d69,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f,PodSandboxId:3c34eb37cd442329a3e9645c9cae0fb0dfa4f78efa40ae493b8bdd7806c329d3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d769
1a75a899,State:CONTAINER_EXITED,CreatedAt:1713806248147271957,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29630f2b98931e48da1483cad97880d6,},Annotations:map[string]string{io.kubernetes.container.hash: 9fabe011,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89,PodSandboxId:c0ff0dbc27bbd0bf354404610503ea26fc4e32b02fa650a1146550b89e1fcb6b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:
1713806248057162915,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-025067,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23538072fbf30b79e739fab4230ece56,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76abfa64-7d7f-4c58-b577-a4d37102f2be name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	94c2813a81b57       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       5                   b09e16f41fde5       storage-provisioner
	ad6996152a42b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               4                   23783db6b701c       kindnet-tmxd9
	d87a0b06fb028       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Running             kube-apiserver            3                   d19bd18d24b2b       kube-apiserver-ha-025067
	5b806d49ec726       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Running             kube-controller-manager   2                   aad1cce9e2255       kube-controller-manager-ha-025067
	70e83e476939c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   397ddd568c901       busybox-fc5497c4f-l97ld
	2291407bc9dbd       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  0                   69b798cf09790       kube-vip-ha-025067
	699b754e81059       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               3                   23783db6b701c       kindnet-tmxd9
	638b2dd05dfbb       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      5 minutes ago       Running             kube-proxy                1                   87020be3f343a       kube-proxy-pf7cc
	efa14e10b593c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   69eb4762e010c       etcd-ha-025067
	9a1f08d9bc71f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   69b0bb93f5a8d       coredns-7db6d8ff4d-nswqp
	8b1b8494064dc       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      5 minutes ago       Exited              kube-apiserver            2                   d19bd18d24b2b       kube-apiserver-ha-025067
	251e837f0b8a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   b09e16f41fde5       storage-provisioner
	10e0c7cd8590b       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      5 minutes ago       Running             kube-scheduler            1                   31b4c9e60ad47       kube-scheduler-ha-025067
	05ea6b3902d85       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      5 minutes ago       Exited              kube-controller-manager   1                   aad1cce9e2255       kube-controller-manager-ha-025067
	709020245fe70       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   cd88edc27f601       coredns-7db6d8ff4d-vrl4h
	983cb8537237f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   3c3abb6c214d4       busybox-fc5497c4f-l97ld
	c0af820e7bd06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   c2921baac16b3       coredns-7db6d8ff4d-vrl4h
	524e02d80347d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   b553b11bb990b       coredns-7db6d8ff4d-nswqp
	f841dcb8dd09b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      16 minutes ago      Exited              kube-proxy                0                   052596614cf9c       kube-proxy-pf7cc
	b3d751e3e8f50       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   3c34eb37cd442       etcd-ha-025067
	549930f1d83f6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      16 minutes ago      Exited              kube-scheduler            0                   c0ff0dbc27bbd       kube-scheduler-ha-025067
	
	
	==> coredns [524e02d80347da747a8dabdaddc14aee5c6fc990b653dadec2bcc50c7745d5f0] <==
	[INFO] 10.244.1.2:49747 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000094083s
	[INFO] 10.244.1.2:39851 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094869s
	[INFO] 10.244.1.2:51921 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000132016s
	[INFO] 10.244.2.2:46485 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151891s
	[INFO] 10.244.2.2:52343 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00183731s
	[INFO] 10.244.2.2:36982 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162215s
	[INFO] 10.244.2.2:56193 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001471319s
	[INFO] 10.244.2.2:48503 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072359s
	[INFO] 10.244.2.2:35429 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006794s
	[INFO] 10.244.2.2:56484 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092002s
	[INFO] 10.244.0.4:39516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189987s
	[INFO] 10.244.0.4:60228 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082728s
	[INFO] 10.244.1.2:44703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203159s
	[INFO] 10.244.1.2:33524 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167155s
	[INFO] 10.244.1.2:43201 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098618s
	[INFO] 10.244.2.2:53563 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000215578s
	[INFO] 10.244.2.2:54616 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163304s
	[INFO] 10.244.0.4:49280 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092142s
	[INFO] 10.244.1.2:40544 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000116574s
	[INFO] 10.244.2.2:43384 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000249064s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [709020245fe70d7f18212fe1877b1bdde25fedf8c10c9d09f47cc67803400d55] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:48954->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:48954->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44606->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1117179347]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 17:28:34.718) (total time: 12714ms):
	Trace[1117179347]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44606->10.96.0.1:443: read: connection reset by peer 12714ms (17:28:47.432)
	Trace[1117179347]: [12.714168324s] [12.714168324s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:44606->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [9a1f08d9bc71f31266ca7175a17cd14000690d3fe1e813936c9530c2b7c31611] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38882->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:38882->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38880->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1333151839]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Apr-2024 17:28:36.263) (total time: 11167ms):
	Trace[1333151839]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38880->10.96.0.1:443: read: connection reset by peer 11167ms (17:28:47.431)
	Trace[1333151839]: [11.167963952s] [11.167963952s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:38880->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c0af820e7bd06a17c2443bb1eea7eeda574faf94fdbba533d0aacd7c8c3a7d55] <==
	[INFO] 10.244.0.4:44231 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.015736347s
	[INFO] 10.244.0.4:37322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115326s
	[INFO] 10.244.1.2:58538 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135694s
	[INFO] 10.244.1.2:51828 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153493s
	[INFO] 10.244.1.2:44556 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001447535s
	[INFO] 10.244.2.2:44901 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000139485s
	[INFO] 10.244.0.4:42667 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108865s
	[INFO] 10.244.0.4:54399 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073213s
	[INFO] 10.244.1.2:35127 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090826s
	[INFO] 10.244.2.2:52722 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185046s
	[INFO] 10.244.2.2:49596 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128238s
	[INFO] 10.244.0.4:59309 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125541s
	[INFO] 10.244.0.4:42344 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215786s
	[INFO] 10.244.0.4:34084 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000295612s
	[INFO] 10.244.1.2:50561 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016924s
	[INFO] 10.244.1.2:40185 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080135s
	[INFO] 10.244.1.2:58530 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083107s
	[INFO] 10.244.2.2:52310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147992s
	[INFO] 10.244.2.2:48499 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103149s
	[INFO] 10.244.2.2:60500 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00018474s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-025067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:17:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:34:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:34:11 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:34:11 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:34:11 +0000   Mon, 22 Apr 2024 17:17:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:34:11 +0000   Mon, 22 Apr 2024 17:17:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.22
	  Hostname:    ha-025067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 73a664449fd9403194a5919e23b0871b
	  System UUID:                73a66444-9fd9-4031-94a5-919e23b0871b
	  Boot ID:                    4c2ace2e-318b-4b8f-bd1e-a5f6d5151f88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-l97ld              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-nswqp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-vrl4h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-025067                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-tmxd9                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-025067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-025067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pf7cc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-025067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-025067                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 5m10s              kube-proxy       
	  Normal   NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-025067 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-025067 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-025067 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   NodeReady                16m                kubelet          Node ha-025067 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Warning  ContainerGCFailed        6m42s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m5s               node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   RegisteredNode           4m58s              node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	  Normal   RegisteredNode           3m14s              node-controller  Node ha-025067 event: Registered Node ha-025067 in Controller
	
	
	Name:               ha-025067-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_18_41_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:18:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:34:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:29:48 +0000   Mon, 22 Apr 2024 17:29:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.56
	  Hostname:    ha-025067-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a1f034a156f4a3fb9cb79780785386e
	  System UUID:                8a1f034a-156f-4a3f-b9cb-79780785386e
	  Boot ID:                    0b380f4b-d8f3-41ee-8b4f-eb34838f377a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-m6qxt                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-025067-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-ctdzp                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-025067-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-025067-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-dk5ww                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-025067-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-025067-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m4s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-025067-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-025067-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-025067-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-025067-m02 status is now: NodeNotReady
	  Normal  Starting                 5m37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m37s (x8 over 5m37s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m37s (x8 over 5m37s)  kubelet          Node ha-025067-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m37s (x7 over 5m37s)  kubelet          Node ha-025067-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m5s                   node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-025067-m02 event: Registered Node ha-025067-m02 in Controller
	
	
	Name:               ha-025067-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-025067-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=ha-025067
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_20_51_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:20:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-025067-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:31:47 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:32:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:32:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:32:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 17:31:27 +0000   Mon, 22 Apr 2024 17:32:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.80
	  Hostname:    ha-025067-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bfe8f8092cda4851adcca8410e5437c9
	  System UUID:                bfe8f809-2cda-4851-adcc-a8410e5437c9
	  Boot ID:                    5499526e-ba23-4234-be24-d0b1e2a89439
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ccxz4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 kindnet-d6tpm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-kbhbk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 2m45s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-025067-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-025067-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-025067-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-025067-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m5s                   node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   RegisteredNode           4m58s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   NodeNotReady             4m25s                  node-controller  Node ha-025067-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-025067-m04 event: Registered Node ha-025067-m04 in Controller
	  Normal   Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m49s (x3 over 2m49s)  kubelet          Node ha-025067-m04 has been rebooted, boot id: 5499526e-ba23-4234-be24-d0b1e2a89439
	  Normal   NodeHasSufficientMemory  2m49s (x4 over 2m49s)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x4 over 2m49s)  kubelet          Node ha-025067-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x4 over 2m49s)  kubelet          Node ha-025067-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m49s                  kubelet          Node ha-025067-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-025067-m04 status is now: NodeReady
	  Normal   NodeNotReady             107s                   node-controller  Node ha-025067-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +8.717166] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.065948] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064117] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.195469] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.121015] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.285063] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.448511] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059431] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.181808] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.968933] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.287359] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.083571] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.934890] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 17:18] kauditd_printk_skb: 74 callbacks suppressed
	[Apr22 17:25] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 17:28] systemd-fstab-generator[3860]: Ignoring "noauto" option for root device
	[  +0.155389] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.184116] systemd-fstab-generator[3886]: Ignoring "noauto" option for root device
	[  +0.151233] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[  +0.286925] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +2.910594] systemd-fstab-generator[4086]: Ignoring "noauto" option for root device
	[  +5.545349] kauditd_printk_skb: 132 callbacks suppressed
	[ +12.865031] kauditd_printk_skb: 89 callbacks suppressed
	[ +10.801229] kauditd_printk_skb: 1 callbacks suppressed
	[Apr22 17:29] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [b3d751e3e8f50e9839922f2fb2d518d4cf620df5a1a7b6b9cfea870af356063f] <==
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/22 17:26:40 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-22T17:26:41.187704Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:26:41.187812Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.22:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T17:26:41.187917Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"cde0bb267fc4e559","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-22T17:26:41.188199Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188272Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188339Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188472Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188542Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188619Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188653Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1bcf6bb21b2d3021"}
	{"level":"info","ts":"2024-04-22T17:26:41.188677Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.18873Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.188808Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.188927Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.189106Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.189249Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.189289Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:26:41.192505Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-04-22T17:26:41.192651Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.22:2380"}
	{"level":"info","ts":"2024-04-22T17:26:41.19269Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-025067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.22:2380"],"advertise-client-urls":["https://192.168.39.22:2379"]}
	
	
	==> etcd [efa14e10b593c1d3cd6d39953f2249533d40c8b60b0e5df4b40274e1f0b9d4bb] <==
	{"level":"info","ts":"2024-04-22T17:30:45.794941Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:30:45.821755Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"cde0bb267fc4e559","to":"e9be776574408594","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-22T17:30:45.821822Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:30:45.827313Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"cde0bb267fc4e559","to":"e9be776574408594","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-22T17:30:45.827376Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:30:47.040794Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.720353ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16526397567203304466 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.22\" mod_revision:2371 > success:<request_put:<key:\"/registry/masterleases/192.168.39.22\" value_size:66 lease:7303025530348528656 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.22\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-22T17:30:47.041097Z","caller":"traceutil/trace.go:171","msg":"trace[1813464151] transaction","detail":"{read_only:false; response_revision:2408; number_of_response:1; }","duration":"201.73996ms","start":"2024-04-22T17:30:46.839282Z","end":"2024-04-22T17:30:47.041022Z","steps":["trace[1813464151] 'process raft request'  (duration: 68.495556ms)","trace[1813464151] 'compare'  (duration: 131.597074ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:30:57.485267Z","caller":"traceutil/trace.go:171","msg":"trace[1103622926] transaction","detail":"{read_only:false; response_revision:2462; number_of_response:1; }","duration":"101.077242ms","start":"2024-04-22T17:30:57.384174Z","end":"2024-04-22T17:30:57.485251Z","steps":["trace[1103622926] 'process raft request'  (duration: 100.985146ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:31:41.357554Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cde0bb267fc4e559 switched to configuration voters=(2003938771907457057 14835062946585175385)"}
	{"level":"info","ts":"2024-04-22T17:31:41.359962Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"eaed0234649c774e","local-member-id":"cde0bb267fc4e559","removed-remote-peer-id":"e9be776574408594","removed-remote-peer-urls":["https://192.168.39.220:2380"]}
	{"level":"info","ts":"2024-04-22T17:31:41.360128Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:31:41.360373Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:31:41.360422Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:31:41.361144Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:31:41.36144Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:31:41.361679Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:31:41.362184Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594","error":"context canceled"}
	{"level":"warn","ts":"2024-04-22T17:31:41.362258Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e9be776574408594","error":"failed to read e9be776574408594 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-22T17:31:41.36232Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:31:41.362567Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594","error":"context canceled"}
	{"level":"info","ts":"2024-04-22T17:31:41.362621Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"cde0bb267fc4e559","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:31:41.362666Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e9be776574408594"}
	{"level":"info","ts":"2024-04-22T17:31:41.362695Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"cde0bb267fc4e559","removed-remote-peer-id":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:31:41.382266Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"cde0bb267fc4e559","remote-peer-id-stream-handler":"cde0bb267fc4e559","remote-peer-id-from":"e9be776574408594"}
	{"level":"warn","ts":"2024-04-22T17:31:41.392367Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"cde0bb267fc4e559","remote-peer-id-stream-handler":"cde0bb267fc4e559","remote-peer-id-from":"e9be776574408594"}
	
	
	==> kernel <==
	 17:34:16 up 17 min,  0 users,  load average: 0.20, 0.43, 0.28
	Linux ha-025067 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [699b754e810591cf7919bf90a3745a3dc53bd122b0550f9d614c7633e290e1ae] <==
	I0422 17:28:22.932846       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0422 17:28:22.932935       1 main.go:107] hostIP = 192.168.39.22
	podIP = 192.168.39.22
	I0422 17:28:22.977581       1 main.go:116] setting mtu 1500 for CNI 
	I0422 17:28:22.977685       1 main.go:146] kindnetd IP family: "ipv4"
	I0422 17:28:22.977712       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0422 17:28:33.285584       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0422 17:28:35.143625       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 17:28:38.215771       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 17:28:41.287714       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0422 17:28:44.359728       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [ad6996152a42b1c00a08394cd38a691effc4c84242735186c80e9600480ef500] <==
	I0422 17:33:30.737957       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:33:40.746376       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:33:40.746435       1 main.go:227] handling current node
	I0422 17:33:40.746450       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:33:40.746458       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:33:40.746606       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:33:40.746642       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:33:50.762316       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:33:50.762363       1 main.go:227] handling current node
	I0422 17:33:50.762375       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:33:50.762381       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:33:50.762491       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:33:50.762526       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:34:00.776883       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:34:00.776941       1 main.go:227] handling current node
	I0422 17:34:00.776958       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:34:00.776973       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:34:00.777250       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:34:00.777294       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	I0422 17:34:10.784190       1 main.go:223] Handling node with IPs: map[192.168.39.22:{}]
	I0422 17:34:10.784282       1 main.go:227] handling current node
	I0422 17:34:10.784306       1 main.go:223] Handling node with IPs: map[192.168.39.56:{}]
	I0422 17:34:10.784324       1 main.go:250] Node ha-025067-m02 has CIDR [10.244.1.0/24] 
	I0422 17:34:10.784517       1 main.go:223] Handling node with IPs: map[192.168.39.80:{}]
	I0422 17:34:10.784554       1 main.go:250] Node ha-025067-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [8b1b8494064dcdd9e389d85301fce6f46505f2a76fafc1dfff5cbd4fc49d3be5] <==
	I0422 17:28:23.071110       1 options.go:221] external host was not specified, using 192.168.39.22
	I0422 17:28:23.106299       1 server.go:148] Version: v1.30.0
	I0422 17:28:23.106409       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:28:24.324796       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0422 17:28:24.330665       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 17:28:24.333948       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0422 17:28:24.334000       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0422 17:28:24.334254       1 instance.go:299] Using reconciler: lease
	W0422 17:28:44.324402       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0422 17:28:44.324862       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0422 17:28:44.335926       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [d87a0b06fb028f2e4b37e36479a0bb3233e9f0d59f981bfdf7ffb002b2fb8348] <==
	I0422 17:29:05.378512       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0422 17:29:05.378529       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0422 17:29:05.439822       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 17:29:05.440617       1 aggregator.go:165] initial CRD sync complete...
	I0422 17:29:05.440784       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 17:29:05.440813       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 17:29:05.441447       1 cache.go:39] Caches are synced for autoregister controller
	I0422 17:29:05.481565       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 17:29:05.483914       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 17:29:05.483975       1 policy_source.go:224] refreshing policies
	I0422 17:29:05.531136       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 17:29:05.536093       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 17:29:05.536163       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 17:29:05.536239       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 17:29:05.537003       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 17:29:05.537134       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 17:29:05.538363       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 17:29:05.541735       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0422 17:29:05.577284       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220 192.168.39.56]
	I0422 17:29:05.586584       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 17:29:05.612414       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0422 17:29:05.616597       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0422 17:29:06.351020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0422 17:29:06.838219       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.220 192.168.39.56]
	W0422 17:31:56.851361       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.22 192.168.39.56]
	
	
	==> kube-controller-manager [05ea6b3902d85d1a24a008dcead0247d60df36c9731bc2117c3ad8d9594a579b] <==
	I0422 17:28:23.752439       1 serving.go:380] Generated self-signed cert in-memory
	I0422 17:28:24.167927       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 17:28:24.167977       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:28:24.170896       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 17:28:24.171079       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 17:28:24.171663       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 17:28:24.171737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0422 17:28:45.344787       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.22:8443/healthz\": dial tcp 192.168.39.22:8443: connect: connection refused"
	
	
	==> kube-controller-manager [5b806d49ec72658d90809c1d41c5763079879365485ba462a19623f9f02dcad8] <==
	E0422 17:32:18.918950       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	E0422 17:32:18.918958       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	E0422 17:32:18.918964       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	E0422 17:32:18.918969       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	I0422 17:32:29.140893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.984024ms"
	I0422 17:32:29.141104       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.251µs"
	E0422 17:32:38.919263       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	E0422 17:32:38.919413       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	E0422 17:32:38.919443       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	E0422 17:32:38.919469       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	E0422 17:32:38.919493       1 gc_controller.go:153] "Failed to get node" err="node \"ha-025067-m03\" not found" logger="pod-garbage-collector-controller" node="ha-025067-m03"
	I0422 17:32:38.932847       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-025067-m03"
	I0422 17:32:38.966899       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-025067-m03"
	I0422 17:32:38.966957       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-025067-m03"
	I0422 17:32:38.998735       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-025067-m03"
	I0422 17:32:38.998881       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-025067-m03"
	I0422 17:32:39.033426       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-025067-m03"
	I0422 17:32:39.033582       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ztcgm"
	I0422 17:32:39.068864       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-ztcgm"
	I0422 17:32:39.068922       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-wsr9x"
	I0422 17:32:39.104707       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-wsr9x"
	I0422 17:32:39.104898       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-025067-m03"
	I0422 17:32:39.135013       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-025067-m03"
	I0422 17:32:39.135263       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-025067-m03"
	I0422 17:32:39.166736       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-025067-m03"
	
	
	==> kube-proxy [638b2dd05dfbb1e518bb7bdaa5cc27347ea1e0f08e2370017903cf70c8868231] <==
	I0422 17:28:24.336531       1 server_linux.go:69] "Using iptables proxy"
	E0422 17:28:26.952527       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:30.024282       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:33.096220       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:39.242317       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0422 17:28:48.455755       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-025067\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0422 17:29:05.607339       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.22"]
	I0422 17:29:05.665981       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:29:05.666124       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:29:05.666163       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:29:05.669443       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:29:05.669961       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:29:05.670020       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:29:05.672567       1 config.go:192] "Starting service config controller"
	I0422 17:29:05.672656       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:29:05.672788       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:29:05.672874       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:29:05.673976       1 config.go:319] "Starting node config controller"
	I0422 17:29:05.674020       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:29:05.773853       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:29:05.773930       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:29:05.774404       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f841dcb8dd09bd9c83b34bb62b6365bc6538afe9364e2ede569b7ea0a664ca72] <==
	E0422 17:25:37.994304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:41.064302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:41.064635       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:41.064833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:41.065357       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:41.065491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:41.065632       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:47.209155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:47.209244       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:47.209338       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:47.209413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:47.209486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:47.209529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:56.425639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:56.426398       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:25:59.496576       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:25:59.496708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:02.568615       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:02.568807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:11.785431       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:11.785512       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-025067&resourceVersion=1707": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:24.074288       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:24.074502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1816": dial tcp 192.168.39.254:8443: connect: no route to host
	W0422 17:26:27.144183       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	E0422 17:26:27.144246       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1732": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [10e0c7cd8590bd7963ed9134131f25235cb6dcb8c5c2e15fdfa3e1d8ab079266] <==
	W0422 17:29:00.248383       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.22:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:00.248515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.22:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.063657       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.063718       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.22:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.331333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.22:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.331396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.22:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.497635       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.22:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.497749       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.22:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:02.727741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.22:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:02.727803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.22:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:03.363886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.22:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	E0422 17:29:03.363948       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.22:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.22:8443: connect: connection refused
	W0422 17:29:05.418464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:29:05.419019       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:29:05.418849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:29:05.419335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 17:29:05.418914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:29:05.419478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:29:05.418958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:29:05.419667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 17:29:05.421363       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:29:05.421473       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 17:29:22.055126       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 17:31:38.013420       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w97gv\": pod busybox-fc5497c4f-w97gv is already assigned to node \"ha-025067-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-w97gv" node="ha-025067-m04"
	E0422 17:31:38.013641       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-w97gv\": pod busybox-fc5497c4f-w97gv is already assigned to node \"ha-025067-m04\"" pod="default/busybox-fc5497c4f-w97gv"
	
	
	==> kube-scheduler [549930f1d83f6e16f2b41fc624922f9ab6db01ed14473909c69e44c70ce27a89] <==
	W0422 17:26:33.232965       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:33.232999       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 17:26:33.323725       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:26:33.323785       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 17:26:33.619457       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 17:26:33.619517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:26:33.632550       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:26:33.632574       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:26:33.866280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:26:33.866421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 17:26:33.908821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:33.908922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 17:26:34.117983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:26:34.118076       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 17:26:34.736974       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:34.737085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 17:26:34.867445       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 17:26:34.867475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 17:26:35.728687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:26:35.728807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:26:40.289716       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 17:26:40.289876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 17:26:40.710556       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:40.710598       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:26:40.897602       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 17:29:45 ha-025067 kubelet[1368]: I0422 17:29:45.460571    1368 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-025067" podUID="8c381060-83d4-411b-98ac-c6b1842cd3d8"
	Apr 22 17:29:45 ha-025067 kubelet[1368]: I0422 17:29:45.481007    1368 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-025067"
	Apr 22 17:29:56 ha-025067 kubelet[1368]: I0422 17:29:56.463942    1368 scope.go:117] "RemoveContainer" containerID="251e837f0b8a0c2b22e41543ba1c1977df36d1c494f66f6a4877dfed3b63195f"
	Apr 22 17:29:59 ha-025067 kubelet[1368]: I0422 17:29:59.264908    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-l97ld" podStartSLOduration=583.649191038 podStartE2EDuration="9m46.264878956s" podCreationTimestamp="2024-04-22 17:20:13 +0000 UTC" firstStartedPulling="2024-04-22 17:20:14.247298805 +0000 UTC m=+159.963554649" lastFinishedPulling="2024-04-22 17:20:16.862986722 +0000 UTC m=+162.579242567" observedRunningTime="2024-04-22 17:20:17.257430596 +0000 UTC m=+162.973686464" watchObservedRunningTime="2024-04-22 17:29:59.264878956 +0000 UTC m=+744.981134820"
	Apr 22 17:29:59 ha-025067 kubelet[1368]: I0422 17:29:59.286984    1368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-025067" podStartSLOduration=14.286963447 podStartE2EDuration="14.286963447s" podCreationTimestamp="2024-04-22 17:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-22 17:29:59.283570209 +0000 UTC m=+744.999826071" watchObservedRunningTime="2024-04-22 17:29:59.286963447 +0000 UTC m=+745.003219311"
	Apr 22 17:30:34 ha-025067 kubelet[1368]: E0422 17:30:34.512436    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:30:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:30:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:30:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:30:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:31:34 ha-025067 kubelet[1368]: E0422 17:31:34.518595    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:31:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:31:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:31:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:31:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:32:34 ha-025067 kubelet[1368]: E0422 17:32:34.512881    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:32:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:32:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:32:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:32:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:33:34 ha-025067 kubelet[1368]: E0422 17:33:34.511795    1368 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:33:34 ha-025067 kubelet[1368]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:33:34 ha-025067 kubelet[1368]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:33:34 ha-025067 kubelet[1368]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:33:34 ha-025067 kubelet[1368]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 17:34:15.014211   39515 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-025067 -n ha-025067
helpers_test.go:261: (dbg) Run:  kubectl --context ha-025067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.18s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (313.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-704531
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-704531
E0422 17:50:07.901778   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-704531: exit status 82 (2m1.999735099s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-704531-m03"  ...
	* Stopping node "multinode-704531-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-704531" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-704531 --wait=true -v=8 --alsologtostderr
E0422 17:51:19.002661   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-704531 --wait=true -v=8 --alsologtostderr: (3m9.211495355s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-704531
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-704531 -n multinode-704531
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-704531 logs -n 25: (1.638933392s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile478955910/001/cp-test_multinode-704531-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531:/home/docker/cp-test_multinode-704531-m02_multinode-704531.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531 sudo cat                                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m02_multinode-704531.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03:/home/docker/cp-test_multinode-704531-m02_multinode-704531-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531-m03 sudo cat                                   | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m02_multinode-704531-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp testdata/cp-test.txt                                                | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile478955910/001/cp-test_multinode-704531-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531:/home/docker/cp-test_multinode-704531-m03_multinode-704531.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531 sudo cat                                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m03_multinode-704531.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02:/home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531-m02 sudo cat                                   | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-704531 node stop m03                                                          | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	| node    | multinode-704531 node start                                                             | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-704531                                                                | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:48 UTC |                     |
	| stop    | -p multinode-704531                                                                     | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:48 UTC |                     |
	| start   | -p multinode-704531                                                                     | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:50 UTC | 22 Apr 24 17:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-704531                                                                | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 17:50:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 17:50:19.843688   48612 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:50:19.844232   48612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:50:19.844252   48612 out.go:304] Setting ErrFile to fd 2...
	I0422 17:50:19.844258   48612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:50:19.844731   48612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:50:19.845680   48612 out.go:298] Setting JSON to false
	I0422 17:50:19.846712   48612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5565,"bootTime":1713802655,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:50:19.846777   48612 start.go:139] virtualization: kvm guest
	I0422 17:50:19.848615   48612 out.go:177] * [multinode-704531] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:50:19.850361   48612 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:50:19.850363   48612 notify.go:220] Checking for updates...
	I0422 17:50:19.851846   48612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:50:19.853317   48612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:50:19.854556   48612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:50:19.855752   48612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:50:19.856827   48612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:50:19.858500   48612 config.go:182] Loaded profile config "multinode-704531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:50:19.858588   48612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:50:19.859020   48612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:50:19.859060   48612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:50:19.873670   48612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0422 17:50:19.874043   48612 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:50:19.874599   48612 main.go:141] libmachine: Using API Version  1
	I0422 17:50:19.874619   48612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:50:19.874974   48612 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:50:19.875170   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:50:19.910342   48612 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 17:50:19.911625   48612 start.go:297] selected driver: kvm2
	I0422 17:50:19.911640   48612 start.go:901] validating driver "kvm2" against &{Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:50:19.911764   48612 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:50:19.912057   48612 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:50:19.912118   48612 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 17:50:19.927667   48612 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 17:50:19.928319   48612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:50:19.928384   48612 cni.go:84] Creating CNI manager for ""
	I0422 17:50:19.928396   48612 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 17:50:19.928458   48612 start.go:340] cluster config:
	{Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:50:19.928582   48612 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:50:19.930932   48612 out.go:177] * Starting "multinode-704531" primary control-plane node in "multinode-704531" cluster
	I0422 17:50:19.932058   48612 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:50:19.932095   48612 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 17:50:19.932112   48612 cache.go:56] Caching tarball of preloaded images
	I0422 17:50:19.932180   48612 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:50:19.932191   48612 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:50:19.932323   48612 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/config.json ...
	I0422 17:50:19.932506   48612 start.go:360] acquireMachinesLock for multinode-704531: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:50:19.932546   48612 start.go:364] duration metric: took 22.372µs to acquireMachinesLock for "multinode-704531"
	I0422 17:50:19.932567   48612 start.go:96] Skipping create...Using existing machine configuration
	I0422 17:50:19.932574   48612 fix.go:54] fixHost starting: 
	I0422 17:50:19.932837   48612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:50:19.932877   48612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:50:19.946948   48612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0422 17:50:19.947360   48612 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:50:19.947834   48612 main.go:141] libmachine: Using API Version  1
	I0422 17:50:19.947855   48612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:50:19.948210   48612 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:50:19.948381   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:50:19.948521   48612 main.go:141] libmachine: (multinode-704531) Calling .GetState
	I0422 17:50:19.950064   48612 fix.go:112] recreateIfNeeded on multinode-704531: state=Running err=<nil>
	W0422 17:50:19.950094   48612 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 17:50:19.952158   48612 out.go:177] * Updating the running kvm2 "multinode-704531" VM ...
	I0422 17:50:19.953397   48612 machine.go:94] provisionDockerMachine start ...
	I0422 17:50:19.953413   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:50:19.953609   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:19.956211   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:19.956629   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:19.956649   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:19.956784   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:19.956954   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:19.957101   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:19.957206   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:19.957328   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:19.957513   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:19.957524   48612 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 17:50:20.067688   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-704531
	
	I0422 17:50:20.067715   48612 main.go:141] libmachine: (multinode-704531) Calling .GetMachineName
	I0422 17:50:20.067975   48612 buildroot.go:166] provisioning hostname "multinode-704531"
	I0422 17:50:20.067995   48612 main.go:141] libmachine: (multinode-704531) Calling .GetMachineName
	I0422 17:50:20.068243   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.071396   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.071781   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.071815   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.071966   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.072170   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.072351   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.072492   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.072671   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:20.072897   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:20.072915   48612 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-704531 && echo "multinode-704531" | sudo tee /etc/hostname
	I0422 17:50:20.188990   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-704531
	
	I0422 17:50:20.189019   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.191682   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.192007   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.192041   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.192164   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.192362   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.192527   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.192669   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.192845   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:20.193037   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:20.193061   48612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-704531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-704531/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-704531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:50:20.292284   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 17:50:20.292311   48612 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:50:20.292332   48612 buildroot.go:174] setting up certificates
	I0422 17:50:20.292342   48612 provision.go:84] configureAuth start
	I0422 17:50:20.292353   48612 main.go:141] libmachine: (multinode-704531) Calling .GetMachineName
	I0422 17:50:20.292622   48612 main.go:141] libmachine: (multinode-704531) Calling .GetIP
	I0422 17:50:20.295550   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.295970   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.296000   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.296122   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.298381   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.298794   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.298831   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.299067   48612 provision.go:143] copyHostCerts
	I0422 17:50:20.299096   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:50:20.299141   48612 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:50:20.299153   48612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:50:20.299233   48612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:50:20.299364   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:50:20.299397   48612 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:50:20.299407   48612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:50:20.299448   48612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:50:20.299528   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:50:20.299551   48612 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:50:20.299561   48612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:50:20.299595   48612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:50:20.299665   48612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.multinode-704531 san=[127.0.0.1 192.168.39.41 localhost minikube multinode-704531]
	I0422 17:50:20.521271   48612 provision.go:177] copyRemoteCerts
	I0422 17:50:20.521348   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:50:20.521386   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.524527   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.524910   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.524954   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.525158   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.525342   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.525525   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.525657   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:50:20.615922   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:50:20.615993   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 17:50:20.643917   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:50:20.643987   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:50:20.677133   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:50:20.677223   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0422 17:50:20.704850   48612 provision.go:87] duration metric: took 412.494646ms to configureAuth
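	For anyone spot-checking the auth provisioning above outside the test harness: the remote cert paths come straight from this log (/etc/docker/ca.pem, /etc/docker/server.pem, /etc/docker/server-key.pem), and a quick illustrative check with the same CLI the suite drives would be
	    out/minikube-linux-amd64 -p multinode-704531 ssh "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"
	This only confirms the files landed on the guest; it does not re-validate the SANs listed in the server-cert generation entry above.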
	I0422 17:50:20.704875   48612 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:50:20.705165   48612 config.go:182] Loaded profile config "multinode-704531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:50:20.705283   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.707758   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.708147   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.708179   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.708289   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.708532   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.708715   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.708880   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.709092   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:20.709254   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:20.709317   48612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:51:51.422644   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:51:51.422681   48612 machine.go:97] duration metric: took 1m31.469273052s to provisionDockerMachine
	I0422 17:51:51.422696   48612 start.go:293] postStartSetup for "multinode-704531" (driver="kvm2")
	I0422 17:51:51.422709   48612 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:51:51.422757   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.423077   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:51:51.423108   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.426246   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.426762   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.426782   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.426947   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.427138   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.427341   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.427505   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:51:51.511491   48612 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:51:51.515965   48612 command_runner.go:130] > NAME=Buildroot
	I0422 17:51:51.516000   48612 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 17:51:51.516006   48612 command_runner.go:130] > ID=buildroot
	I0422 17:51:51.516013   48612 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 17:51:51.516021   48612 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 17:51:51.516060   48612 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:51:51.516078   48612 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:51:51.516143   48612 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:51:51.516238   48612 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:51:51.516250   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:51:51.516362   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:51:51.526355   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:51:51.553303   48612 start.go:296] duration metric: took 130.592949ms for postStartSetup
	I0422 17:51:51.553343   48612 fix.go:56] duration metric: took 1m31.620768585s for fixHost
	I0422 17:51:51.553361   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.556450   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.556821   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.556848   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.557040   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.557255   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.557412   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.557543   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.557735   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:51:51.557918   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:51:51.557928   48612 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:51:51.656641   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713808311.638286955
	
	I0422 17:51:51.656667   48612 fix.go:216] guest clock: 1713808311.638286955
	I0422 17:51:51.656677   48612 fix.go:229] Guest: 2024-04-22 17:51:51.638286955 +0000 UTC Remote: 2024-04-22 17:51:51.553346745 +0000 UTC m=+91.755843684 (delta=84.94021ms)
	I0422 17:51:51.656707   48612 fix.go:200] guest clock delta is within tolerance: 84.94021ms
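	The delta reported here is plain arithmetic on the two timestamps in the previous entry: guest 17:51:51.638286955 minus remote 17:51:51.553346745 is 0.08494021 s, i.e. the 84.94021ms printed above; since it is within tolerance, provisioning simply continues and the machine lock is released on the next line.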
	I0422 17:51:51.656712   48612 start.go:83] releasing machines lock for "multinode-704531", held for 1m31.724158207s
	I0422 17:51:51.656730   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.657020   48612 main.go:141] libmachine: (multinode-704531) Calling .GetIP
	I0422 17:51:51.659553   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.659941   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.659967   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.660134   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.660679   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.660869   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.660963   48612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:51:51.661004   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.661103   48612 ssh_runner.go:195] Run: cat /version.json
	I0422 17:51:51.661132   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.663695   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664027   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.664050   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664069   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664222   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.664409   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.664565   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.664610   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.664648   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664696   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:51:51.664786   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.664940   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.665081   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.665208   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:51:51.773219   48612 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 17:51:51.773296   48612 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0422 17:51:51.773450   48612 ssh_runner.go:195] Run: systemctl --version
	I0422 17:51:51.779579   48612 command_runner.go:130] > systemd 252 (252)
	I0422 17:51:51.779622   48612 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0422 17:51:51.779675   48612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:51:51.944487   48612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 17:51:51.953445   48612 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 17:51:51.953504   48612 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:51:51.953566   48612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:51:51.964532   48612 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 17:51:51.964562   48612 start.go:494] detecting cgroup driver to use...
	I0422 17:51:51.964617   48612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:51:51.983265   48612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:51:51.998394   48612 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:51:51.998481   48612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:51:52.012603   48612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:51:52.026650   48612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:51:52.174809   48612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:51:52.322351   48612 docker.go:233] disabling docker service ...
	I0422 17:51:52.322428   48612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:51:52.339314   48612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:51:52.353867   48612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:51:52.501156   48612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:51:52.643723   48612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:51:52.657849   48612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:51:52.677713   48612 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0422 17:51:52.678207   48612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:51:52.678273   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.689700   48612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:51:52.689774   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.700614   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.711418   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.721969   48612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:51:52.732895   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.743446   48612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.755068   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.765741   48612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:51:52.775624   48612 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 17:51:52.775710   48612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:51:52.785602   48612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:51:52.923233   48612 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:52:01.145001   48612 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.221728274s)
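	Taken together, the sed edits and service commands in the entries above amount to the following CRI-O reconfiguration. This is only a consolidated sketch of what the log already shows (same drop-in file, same values), not minikube's own implementation, which runs these steps from Go over SSH; the default_sysctls edits for net.ipv4.ip_unprivileged_port_start logged at 17:51:52.743-.755 are omitted here for brevity.
	    # /etc/crio/crio.conf.d/02-crio.conf is the drop-in minikube edits in this run
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo rm -rf /etc/cni/net.mk                          # drop minikube's disabled CNI leftovers
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"  # pod traffic needs IP forwarding
	    sudo systemctl daemon-reload && sudo systemctl restart crio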
	I0422 17:52:01.145035   48612 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:52:01.145100   48612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:52:01.150281   48612 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0422 17:52:01.150310   48612 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0422 17:52:01.150320   48612 command_runner.go:130] > Device: 0,22	Inode: 1304        Links: 1
	I0422 17:52:01.150330   48612 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 17:52:01.150338   48612 command_runner.go:130] > Access: 2024-04-22 17:52:01.085298137 +0000
	I0422 17:52:01.150354   48612 command_runner.go:130] > Modify: 2024-04-22 17:52:01.016296209 +0000
	I0422 17:52:01.150362   48612 command_runner.go:130] > Change: 2024-04-22 17:52:01.016296209 +0000
	I0422 17:52:01.150368   48612 command_runner.go:130] >  Birth: -
	I0422 17:52:01.150443   48612 start.go:562] Will wait 60s for crictl version
	I0422 17:52:01.150509   48612 ssh_runner.go:195] Run: which crictl
	I0422 17:52:01.154457   48612 command_runner.go:130] > /usr/bin/crictl
	I0422 17:52:01.154519   48612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:52:01.193804   48612 command_runner.go:130] > Version:  0.1.0
	I0422 17:52:01.193833   48612 command_runner.go:130] > RuntimeName:  cri-o
	I0422 17:52:01.193852   48612 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0422 17:52:01.193861   48612 command_runner.go:130] > RuntimeApiVersion:  v1
	I0422 17:52:01.193881   48612 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:52:01.193959   48612 ssh_runner.go:195] Run: crio --version
	I0422 17:52:01.225943   48612 command_runner.go:130] > crio version 1.29.1
	I0422 17:52:01.225964   48612 command_runner.go:130] > Version:        1.29.1
	I0422 17:52:01.225970   48612 command_runner.go:130] > GitCommit:      unknown
	I0422 17:52:01.225974   48612 command_runner.go:130] > GitCommitDate:  unknown
	I0422 17:52:01.225978   48612 command_runner.go:130] > GitTreeState:   clean
	I0422 17:52:01.225984   48612 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0422 17:52:01.225988   48612 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 17:52:01.225992   48612 command_runner.go:130] > Compiler:       gc
	I0422 17:52:01.225996   48612 command_runner.go:130] > Platform:       linux/amd64
	I0422 17:52:01.226000   48612 command_runner.go:130] > Linkmode:       dynamic
	I0422 17:52:01.226005   48612 command_runner.go:130] > BuildTags:      
	I0422 17:52:01.226010   48612 command_runner.go:130] >   containers_image_ostree_stub
	I0422 17:52:01.226016   48612 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 17:52:01.226021   48612 command_runner.go:130] >   btrfs_noversion
	I0422 17:52:01.226027   48612 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 17:52:01.226032   48612 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 17:52:01.226037   48612 command_runner.go:130] >   seccomp
	I0422 17:52:01.226045   48612 command_runner.go:130] > LDFlags:          unknown
	I0422 17:52:01.226055   48612 command_runner.go:130] > SeccompEnabled:   true
	I0422 17:52:01.226061   48612 command_runner.go:130] > AppArmorEnabled:  false
	I0422 17:52:01.226168   48612 ssh_runner.go:195] Run: crio --version
	I0422 17:52:01.256066   48612 command_runner.go:130] > crio version 1.29.1
	I0422 17:52:01.256093   48612 command_runner.go:130] > Version:        1.29.1
	I0422 17:52:01.256102   48612 command_runner.go:130] > GitCommit:      unknown
	I0422 17:52:01.256108   48612 command_runner.go:130] > GitCommitDate:  unknown
	I0422 17:52:01.256114   48612 command_runner.go:130] > GitTreeState:   clean
	I0422 17:52:01.256122   48612 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0422 17:52:01.256136   48612 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 17:52:01.256140   48612 command_runner.go:130] > Compiler:       gc
	I0422 17:52:01.256145   48612 command_runner.go:130] > Platform:       linux/amd64
	I0422 17:52:01.256149   48612 command_runner.go:130] > Linkmode:       dynamic
	I0422 17:52:01.256154   48612 command_runner.go:130] > BuildTags:      
	I0422 17:52:01.256159   48612 command_runner.go:130] >   containers_image_ostree_stub
	I0422 17:52:01.256163   48612 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 17:52:01.256176   48612 command_runner.go:130] >   btrfs_noversion
	I0422 17:52:01.256183   48612 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 17:52:01.256190   48612 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 17:52:01.256196   48612 command_runner.go:130] >   seccomp
	I0422 17:52:01.256211   48612 command_runner.go:130] > LDFlags:          unknown
	I0422 17:52:01.256218   48612 command_runner.go:130] > SeccompEnabled:   true
	I0422 17:52:01.256227   48612 command_runner.go:130] > AppArmorEnabled:  false
	I0422 17:52:01.258973   48612 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:52:01.260464   48612 main.go:141] libmachine: (multinode-704531) Calling .GetIP
	I0422 17:52:01.263224   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:52:01.263677   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:52:01.263704   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:52:01.263915   48612 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:52:01.268461   48612 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0422 17:52:01.268598   48612 kubeadm.go:877] updating cluster {Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 17:52:01.268759   48612 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:52:01.268817   48612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:52:01.314617   48612 command_runner.go:130] > {
	I0422 17:52:01.314637   48612 command_runner.go:130] >   "images": [
	I0422 17:52:01.314641   48612 command_runner.go:130] >     {
	I0422 17:52:01.314659   48612 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 17:52:01.314664   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.314673   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 17:52:01.314682   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314689   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.314703   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 17:52:01.314714   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 17:52:01.314719   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314726   48612 command_runner.go:130] >       "size": "65291810",
	I0422 17:52:01.314732   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.314741   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.314748   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.314757   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.314761   48612 command_runner.go:130] >     },
	I0422 17:52:01.314765   48612 command_runner.go:130] >     {
	I0422 17:52:01.314771   48612 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 17:52:01.314775   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.314780   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 17:52:01.314794   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314801   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.314809   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 17:52:01.314824   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 17:52:01.314834   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314841   48612 command_runner.go:130] >       "size": "1363676",
	I0422 17:52:01.314851   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.314862   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.314867   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.314874   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.314877   48612 command_runner.go:130] >     },
	I0422 17:52:01.314882   48612 command_runner.go:130] >     {
	I0422 17:52:01.314888   48612 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 17:52:01.314894   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.314899   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 17:52:01.314905   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314909   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.314923   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 17:52:01.314938   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 17:52:01.314948   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314958   48612 command_runner.go:130] >       "size": "31470524",
	I0422 17:52:01.314967   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.314977   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.314985   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.314993   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.314997   48612 command_runner.go:130] >     },
	I0422 17:52:01.315003   48612 command_runner.go:130] >     {
	I0422 17:52:01.315009   48612 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 17:52:01.315015   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315021   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 17:52:01.315029   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315039   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315055   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 17:52:01.315080   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 17:52:01.315090   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315097   48612 command_runner.go:130] >       "size": "61245718",
	I0422 17:52:01.315111   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.315118   48612 command_runner.go:130] >       "username": "nonroot",
	I0422 17:52:01.315138   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315149   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315154   48612 command_runner.go:130] >     },
	I0422 17:52:01.315161   48612 command_runner.go:130] >     {
	I0422 17:52:01.315175   48612 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 17:52:01.315191   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315202   48612 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 17:52:01.315210   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315220   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315228   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 17:52:01.315242   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 17:52:01.315251   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315258   48612 command_runner.go:130] >       "size": "150779692",
	I0422 17:52:01.315267   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315275   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315283   48612 command_runner.go:130] >       },
	I0422 17:52:01.315290   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315300   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315310   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315316   48612 command_runner.go:130] >     },
	I0422 17:52:01.315324   48612 command_runner.go:130] >     {
	I0422 17:52:01.315332   48612 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 17:52:01.315342   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315351   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 17:52:01.315360   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315367   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315382   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 17:52:01.315402   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 17:52:01.315411   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315421   48612 command_runner.go:130] >       "size": "117609952",
	I0422 17:52:01.315430   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315436   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315440   48612 command_runner.go:130] >       },
	I0422 17:52:01.315449   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315466   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315481   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315490   48612 command_runner.go:130] >     },
	I0422 17:52:01.315498   48612 command_runner.go:130] >     {
	I0422 17:52:01.315511   48612 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 17:52:01.315520   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315527   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 17:52:01.315534   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315544   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315560   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 17:52:01.315577   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 17:52:01.315586   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315595   48612 command_runner.go:130] >       "size": "112170310",
	I0422 17:52:01.315604   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315613   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315621   48612 command_runner.go:130] >       },
	I0422 17:52:01.315628   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315634   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315643   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315652   48612 command_runner.go:130] >     },
	I0422 17:52:01.315658   48612 command_runner.go:130] >     {
	I0422 17:52:01.315671   48612 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 17:52:01.315680   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315692   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 17:52:01.315701   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315711   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315740   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 17:52:01.315762   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 17:52:01.315768   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315779   48612 command_runner.go:130] >       "size": "85932953",
	I0422 17:52:01.315788   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.315798   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315805   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315811   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315817   48612 command_runner.go:130] >     },
	I0422 17:52:01.315822   48612 command_runner.go:130] >     {
	I0422 17:52:01.315837   48612 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 17:52:01.315841   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315848   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 17:52:01.315853   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315860   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315875   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 17:52:01.315887   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 17:52:01.315893   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315900   48612 command_runner.go:130] >       "size": "63026502",
	I0422 17:52:01.315905   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315912   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315917   48612 command_runner.go:130] >       },
	I0422 17:52:01.315922   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315925   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315930   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315935   48612 command_runner.go:130] >     },
	I0422 17:52:01.315939   48612 command_runner.go:130] >     {
	I0422 17:52:01.315949   48612 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 17:52:01.315966   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315976   48612 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 17:52:01.315981   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315990   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.316002   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 17:52:01.316012   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 17:52:01.316016   48612 command_runner.go:130] >       ],
	I0422 17:52:01.316023   48612 command_runner.go:130] >       "size": "750414",
	I0422 17:52:01.316031   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.316038   48612 command_runner.go:130] >         "value": "65535"
	I0422 17:52:01.316047   48612 command_runner.go:130] >       },
	I0422 17:52:01.316054   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.316063   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.316072   48612 command_runner.go:130] >       "pinned": true
	I0422 17:52:01.316080   48612 command_runner.go:130] >     }
	I0422 17:52:01.316086   48612 command_runner.go:130] >   ]
	I0422 17:52:01.316094   48612 command_runner.go:130] > }
	I0422 17:52:01.316412   48612 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:52:01.316427   48612 crio.go:433] Images already preloaded, skipping extraction
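	The preload check above boils down to comparing this image list against the images Kubernetes v1.30.0 needs. A rough manual equivalent is sketched below; jq on the workstation is an assumption, the tag list is copied from the JSON output above, and this is not the Go-side comparison minikube itself performs in crio.go.
	    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort > /tmp/crio-images.txt
	    for img in \
	        registry.k8s.io/kube-apiserver:v1.30.0 \
	        registry.k8s.io/kube-controller-manager:v1.30.0 \
	        registry.k8s.io/kube-scheduler:v1.30.0 \
	        registry.k8s.io/kube-proxy:v1.30.0 \
	        registry.k8s.io/etcd:3.5.12-0 \
	        registry.k8s.io/coredns/coredns:v1.11.1 \
	        registry.k8s.io/pause:3.9 \
	        gcr.io/k8s-minikube/storage-provisioner:v5; do
	      grep -qx "$img" /tmp/crio-images.txt || echo "missing: $img"
	    done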
	I0422 17:52:01.316483   48612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:52:01.350905   48612 command_runner.go:130] > {
	I0422 17:52:01.350927   48612 command_runner.go:130] >   "images": [
	I0422 17:52:01.350931   48612 command_runner.go:130] >     {
	I0422 17:52:01.350938   48612 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 17:52:01.350943   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.350948   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 17:52:01.350952   48612 command_runner.go:130] >       ],
	I0422 17:52:01.350956   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.350964   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 17:52:01.350970   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 17:52:01.350974   48612 command_runner.go:130] >       ],
	I0422 17:52:01.350979   48612 command_runner.go:130] >       "size": "65291810",
	I0422 17:52:01.350983   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.350994   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351009   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351018   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351024   48612 command_runner.go:130] >     },
	I0422 17:52:01.351029   48612 command_runner.go:130] >     {
	I0422 17:52:01.351039   48612 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 17:52:01.351056   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351064   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 17:52:01.351068   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351073   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351080   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 17:52:01.351094   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 17:52:01.351100   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351104   48612 command_runner.go:130] >       "size": "1363676",
	I0422 17:52:01.351108   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.351116   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351141   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351151   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351156   48612 command_runner.go:130] >     },
	I0422 17:52:01.351162   48612 command_runner.go:130] >     {
	I0422 17:52:01.351174   48612 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 17:52:01.351184   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351191   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 17:52:01.351198   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351202   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351209   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 17:52:01.351219   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 17:52:01.351225   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351229   48612 command_runner.go:130] >       "size": "31470524",
	I0422 17:52:01.351233   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.351241   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351247   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351257   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351263   48612 command_runner.go:130] >     },
	I0422 17:52:01.351268   48612 command_runner.go:130] >     {
	I0422 17:52:01.351282   48612 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 17:52:01.351292   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351303   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 17:52:01.351307   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351311   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351321   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 17:52:01.351339   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 17:52:01.351348   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351355   48612 command_runner.go:130] >       "size": "61245718",
	I0422 17:52:01.351365   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.351372   48612 command_runner.go:130] >       "username": "nonroot",
	I0422 17:52:01.351385   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351402   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351410   48612 command_runner.go:130] >     },
	I0422 17:52:01.351415   48612 command_runner.go:130] >     {
	I0422 17:52:01.351427   48612 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 17:52:01.351434   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351438   48612 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 17:52:01.351445   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351451   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351472   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 17:52:01.351490   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 17:52:01.351495   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351503   48612 command_runner.go:130] >       "size": "150779692",
	I0422 17:52:01.351513   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.351523   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.351531   48612 command_runner.go:130] >       },
	I0422 17:52:01.351541   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351551   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351561   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351568   48612 command_runner.go:130] >     },
	I0422 17:52:01.351575   48612 command_runner.go:130] >     {
	I0422 17:52:01.351585   48612 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 17:52:01.351595   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351607   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 17:52:01.351616   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351625   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351641   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 17:52:01.351655   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 17:52:01.351664   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351671   48612 command_runner.go:130] >       "size": "117609952",
	I0422 17:52:01.351675   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.351685   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.351694   48612 command_runner.go:130] >       },
	I0422 17:52:01.351702   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351712   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351721   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351727   48612 command_runner.go:130] >     },
	I0422 17:52:01.351744   48612 command_runner.go:130] >     {
	I0422 17:52:01.351757   48612 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 17:52:01.351765   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351771   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 17:52:01.351779   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351785   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351800   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 17:52:01.351817   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 17:52:01.351828   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351839   48612 command_runner.go:130] >       "size": "112170310",
	I0422 17:52:01.351846   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.351854   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.351862   48612 command_runner.go:130] >       },
	I0422 17:52:01.351866   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351873   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351880   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351889   48612 command_runner.go:130] >     },
	I0422 17:52:01.351894   48612 command_runner.go:130] >     {
	I0422 17:52:01.351905   48612 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 17:52:01.351914   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351922   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 17:52:01.351931   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351938   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351965   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 17:52:01.351980   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 17:52:01.351989   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352001   48612 command_runner.go:130] >       "size": "85932953",
	I0422 17:52:01.352011   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.352020   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.352030   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.352039   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.352047   48612 command_runner.go:130] >     },
	I0422 17:52:01.352056   48612 command_runner.go:130] >     {
	I0422 17:52:01.352064   48612 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 17:52:01.352073   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.352081   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 17:52:01.352097   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352110   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.352120   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 17:52:01.352131   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 17:52:01.352137   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352144   48612 command_runner.go:130] >       "size": "63026502",
	I0422 17:52:01.352150   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.352158   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.352164   48612 command_runner.go:130] >       },
	I0422 17:52:01.352175   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.352181   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.352189   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.352195   48612 command_runner.go:130] >     },
	I0422 17:52:01.352204   48612 command_runner.go:130] >     {
	I0422 17:52:01.352215   48612 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 17:52:01.352224   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.352231   48612 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 17:52:01.352240   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352247   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.352261   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 17:52:01.352278   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 17:52:01.352287   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352294   48612 command_runner.go:130] >       "size": "750414",
	I0422 17:52:01.352303   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.352310   48612 command_runner.go:130] >         "value": "65535"
	I0422 17:52:01.352319   48612 command_runner.go:130] >       },
	I0422 17:52:01.352326   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.352336   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.352342   48612 command_runner.go:130] >       "pinned": true
	I0422 17:52:01.352351   48612 command_runner.go:130] >     }
	I0422 17:52:01.352356   48612 command_runner.go:130] >   ]
	I0422 17:52:01.352364   48612 command_runner.go:130] > }
	I0422 17:52:01.352516   48612 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:52:01.352529   48612 cache_images.go:84] Images are preloaded, skipping loading
	I0422 17:52:01.352536   48612 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.30.0 crio true true} ...
	I0422 17:52:01.352679   48612 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-704531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 17:52:01.352766   48612 ssh_runner.go:195] Run: crio config
	I0422 17:52:01.386076   48612 command_runner.go:130] ! time="2024-04-22 17:52:01.367649942Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0422 17:52:01.391722   48612 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0422 17:52:01.398066   48612 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0422 17:52:01.398099   48612 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0422 17:52:01.398106   48612 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0422 17:52:01.398110   48612 command_runner.go:130] > #
	I0422 17:52:01.398118   48612 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0422 17:52:01.398128   48612 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0422 17:52:01.398138   48612 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0422 17:52:01.398154   48612 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0422 17:52:01.398163   48612 command_runner.go:130] > # reload'.
	I0422 17:52:01.398170   48612 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0422 17:52:01.398179   48612 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0422 17:52:01.398186   48612 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0422 17:52:01.398193   48612 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0422 17:52:01.398199   48612 command_runner.go:130] > [crio]
	I0422 17:52:01.398209   48612 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0422 17:52:01.398216   48612 command_runner.go:130] > # containers images, in this directory.
	I0422 17:52:01.398227   48612 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0422 17:52:01.398244   48612 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0422 17:52:01.398254   48612 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0422 17:52:01.398270   48612 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0422 17:52:01.398275   48612 command_runner.go:130] > # imagestore = ""
	I0422 17:52:01.398283   48612 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0422 17:52:01.398289   48612 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0422 17:52:01.398297   48612 command_runner.go:130] > storage_driver = "overlay"
	I0422 17:52:01.398307   48612 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0422 17:52:01.398320   48612 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0422 17:52:01.398329   48612 command_runner.go:130] > storage_option = [
	I0422 17:52:01.398337   48612 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0422 17:52:01.398345   48612 command_runner.go:130] > ]
	I0422 17:52:01.398355   48612 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0422 17:52:01.398368   48612 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0422 17:52:01.398375   48612 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0422 17:52:01.398383   48612 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0422 17:52:01.398404   48612 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0422 17:52:01.398415   48612 command_runner.go:130] > # always happen on a node reboot
	I0422 17:52:01.398426   48612 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0422 17:52:01.398444   48612 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0422 17:52:01.398457   48612 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0422 17:52:01.398466   48612 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0422 17:52:01.398473   48612 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0422 17:52:01.398489   48612 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0422 17:52:01.398505   48612 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0422 17:52:01.398515   48612 command_runner.go:130] > # internal_wipe = true
	I0422 17:52:01.398531   48612 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0422 17:52:01.398543   48612 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0422 17:52:01.398550   48612 command_runner.go:130] > # internal_repair = false
	I0422 17:52:01.398556   48612 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0422 17:52:01.398569   48612 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0422 17:52:01.398581   48612 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0422 17:52:01.398593   48612 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0422 17:52:01.398608   48612 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0422 17:52:01.398616   48612 command_runner.go:130] > [crio.api]
	I0422 17:52:01.398628   48612 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0422 17:52:01.398636   48612 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0422 17:52:01.398645   48612 command_runner.go:130] > # IP address on which the stream server will listen.
	I0422 17:52:01.398656   48612 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0422 17:52:01.398670   48612 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0422 17:52:01.398681   48612 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0422 17:52:01.398690   48612 command_runner.go:130] > # stream_port = "0"
	I0422 17:52:01.398702   48612 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0422 17:52:01.398712   48612 command_runner.go:130] > # stream_enable_tls = false
	I0422 17:52:01.398722   48612 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0422 17:52:01.398730   48612 command_runner.go:130] > # stream_idle_timeout = ""
	I0422 17:52:01.398744   48612 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0422 17:52:01.398757   48612 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0422 17:52:01.398765   48612 command_runner.go:130] > # minutes.
	I0422 17:52:01.398774   48612 command_runner.go:130] > # stream_tls_cert = ""
	I0422 17:52:01.398786   48612 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0422 17:52:01.398798   48612 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0422 17:52:01.398811   48612 command_runner.go:130] > # stream_tls_key = ""
	I0422 17:52:01.398824   48612 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0422 17:52:01.398838   48612 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0422 17:52:01.398869   48612 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0422 17:52:01.398879   48612 command_runner.go:130] > # stream_tls_ca = ""
	I0422 17:52:01.398892   48612 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 17:52:01.398899   48612 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0422 17:52:01.398910   48612 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 17:52:01.398923   48612 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0422 17:52:01.398937   48612 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0422 17:52:01.398949   48612 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0422 17:52:01.398958   48612 command_runner.go:130] > [crio.runtime]
	I0422 17:52:01.398975   48612 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0422 17:52:01.398983   48612 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0422 17:52:01.398988   48612 command_runner.go:130] > # "nofile=1024:2048"
	I0422 17:52:01.399002   48612 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0422 17:52:01.399012   48612 command_runner.go:130] > # default_ulimits = [
	I0422 17:52:01.399020   48612 command_runner.go:130] > # ]
	I0422 17:52:01.399031   48612 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0422 17:52:01.399041   48612 command_runner.go:130] > # no_pivot = false
	I0422 17:52:01.399056   48612 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0422 17:52:01.399065   48612 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0422 17:52:01.399075   48612 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0422 17:52:01.399086   48612 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0422 17:52:01.399098   48612 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0422 17:52:01.399112   48612 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 17:52:01.399132   48612 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0422 17:52:01.399144   48612 command_runner.go:130] > # Cgroup setting for conmon
	I0422 17:52:01.399155   48612 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0422 17:52:01.399163   48612 command_runner.go:130] > conmon_cgroup = "pod"
	I0422 17:52:01.399177   48612 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0422 17:52:01.399188   48612 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0422 17:52:01.399197   48612 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 17:52:01.399206   48612 command_runner.go:130] > conmon_env = [
	I0422 17:52:01.399219   48612 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 17:52:01.399228   48612 command_runner.go:130] > ]
	I0422 17:52:01.399246   48612 command_runner.go:130] > # Additional environment variables to set for all the
	I0422 17:52:01.399257   48612 command_runner.go:130] > # containers. These are overridden if set in the
	I0422 17:52:01.399265   48612 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0422 17:52:01.399274   48612 command_runner.go:130] > # default_env = [
	I0422 17:52:01.399279   48612 command_runner.go:130] > # ]
	I0422 17:52:01.399288   48612 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0422 17:52:01.399307   48612 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0422 17:52:01.399316   48612 command_runner.go:130] > # selinux = false
	I0422 17:52:01.399326   48612 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0422 17:52:01.399340   48612 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0422 17:52:01.399353   48612 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0422 17:52:01.399362   48612 command_runner.go:130] > # seccomp_profile = ""
	I0422 17:52:01.399374   48612 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0422 17:52:01.399385   48612 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0422 17:52:01.399398   48612 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0422 17:52:01.399409   48612 command_runner.go:130] > # which might increase security.
	I0422 17:52:01.399415   48612 command_runner.go:130] > # This option is currently deprecated,
	I0422 17:52:01.399428   48612 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0422 17:52:01.399438   48612 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0422 17:52:01.399451   48612 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0422 17:52:01.399461   48612 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0422 17:52:01.399474   48612 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0422 17:52:01.399488   48612 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0422 17:52:01.399500   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.399510   48612 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0422 17:52:01.399521   48612 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0422 17:52:01.399531   48612 command_runner.go:130] > # the cgroup blockio controller.
	I0422 17:52:01.399540   48612 command_runner.go:130] > # blockio_config_file = ""
	I0422 17:52:01.399551   48612 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0422 17:52:01.399560   48612 command_runner.go:130] > # blockio parameters.
	I0422 17:52:01.399570   48612 command_runner.go:130] > # blockio_reload = false
	I0422 17:52:01.399584   48612 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0422 17:52:01.399594   48612 command_runner.go:130] > # irqbalance daemon.
	I0422 17:52:01.399605   48612 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0422 17:52:01.399617   48612 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0422 17:52:01.399629   48612 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0422 17:52:01.399645   48612 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0422 17:52:01.399658   48612 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0422 17:52:01.399672   48612 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0422 17:52:01.399683   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.399692   48612 command_runner.go:130] > # rdt_config_file = ""
	I0422 17:52:01.399703   48612 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0422 17:52:01.399713   48612 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0422 17:52:01.399755   48612 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0422 17:52:01.399767   48612 command_runner.go:130] > # separate_pull_cgroup = ""
	I0422 17:52:01.399778   48612 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0422 17:52:01.399788   48612 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0422 17:52:01.399799   48612 command_runner.go:130] > # will be added.
	I0422 17:52:01.399807   48612 command_runner.go:130] > # default_capabilities = [
	I0422 17:52:01.399811   48612 command_runner.go:130] > # 	"CHOWN",
	I0422 17:52:01.399817   48612 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0422 17:52:01.399827   48612 command_runner.go:130] > # 	"FSETID",
	I0422 17:52:01.399836   48612 command_runner.go:130] > # 	"FOWNER",
	I0422 17:52:01.399845   48612 command_runner.go:130] > # 	"SETGID",
	I0422 17:52:01.399851   48612 command_runner.go:130] > # 	"SETUID",
	I0422 17:52:01.399855   48612 command_runner.go:130] > # 	"SETPCAP",
	I0422 17:52:01.399865   48612 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0422 17:52:01.399873   48612 command_runner.go:130] > # 	"KILL",
	I0422 17:52:01.399882   48612 command_runner.go:130] > # ]
	I0422 17:52:01.399891   48612 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0422 17:52:01.399901   48612 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0422 17:52:01.399916   48612 command_runner.go:130] > # add_inheritable_capabilities = false
	I0422 17:52:01.399929   48612 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0422 17:52:01.399942   48612 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 17:52:01.399951   48612 command_runner.go:130] > default_sysctls = [
	I0422 17:52:01.399962   48612 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0422 17:52:01.399974   48612 command_runner.go:130] > ]
	I0422 17:52:01.399981   48612 command_runner.go:130] > # List of devices on the host that a
	I0422 17:52:01.399991   48612 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0422 17:52:01.400001   48612 command_runner.go:130] > # allowed_devices = [
	I0422 17:52:01.400010   48612 command_runner.go:130] > # 	"/dev/fuse",
	I0422 17:52:01.400018   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400035   48612 command_runner.go:130] > # List of additional devices. specified as
	I0422 17:52:01.400050   48612 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0422 17:52:01.400060   48612 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0422 17:52:01.400068   48612 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 17:52:01.400077   48612 command_runner.go:130] > # additional_devices = [
	I0422 17:52:01.400087   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400099   48612 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0422 17:52:01.400106   48612 command_runner.go:130] > # cdi_spec_dirs = [
	I0422 17:52:01.400114   48612 command_runner.go:130] > # 	"/etc/cdi",
	I0422 17:52:01.400123   48612 command_runner.go:130] > # 	"/var/run/cdi",
	I0422 17:52:01.400131   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400141   48612 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0422 17:52:01.400151   48612 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0422 17:52:01.400158   48612 command_runner.go:130] > # Defaults to false.
	I0422 17:52:01.400170   48612 command_runner.go:130] > # device_ownership_from_security_context = false
	I0422 17:52:01.400183   48612 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0422 17:52:01.400196   48612 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0422 17:52:01.400205   48612 command_runner.go:130] > # hooks_dir = [
	I0422 17:52:01.400215   48612 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0422 17:52:01.400223   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400232   48612 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0422 17:52:01.400241   48612 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0422 17:52:01.400248   48612 command_runner.go:130] > # its default mounts from the following two files:
	I0422 17:52:01.400257   48612 command_runner.go:130] > #
	I0422 17:52:01.400268   48612 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0422 17:52:01.400281   48612 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0422 17:52:01.400290   48612 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0422 17:52:01.400298   48612 command_runner.go:130] > #
	I0422 17:52:01.400308   48612 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0422 17:52:01.400320   48612 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0422 17:52:01.400330   48612 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0422 17:52:01.400341   48612 command_runner.go:130] > #      only add mounts it finds in this file.
	I0422 17:52:01.400349   48612 command_runner.go:130] > #
	I0422 17:52:01.400356   48612 command_runner.go:130] > # default_mounts_file = ""
	I0422 17:52:01.400367   48612 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0422 17:52:01.400377   48612 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0422 17:52:01.400482   48612 command_runner.go:130] > pids_limit = 1024
	I0422 17:52:01.400506   48612 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0422 17:52:01.400516   48612 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0422 17:52:01.400527   48612 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0422 17:52:01.400543   48612 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0422 17:52:01.400554   48612 command_runner.go:130] > # log_size_max = -1
	I0422 17:52:01.400571   48612 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0422 17:52:01.400581   48612 command_runner.go:130] > # log_to_journald = false
	I0422 17:52:01.400599   48612 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0422 17:52:01.400608   48612 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0422 17:52:01.400617   48612 command_runner.go:130] > # Path to directory for container attach sockets.
	I0422 17:52:01.400628   48612 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0422 17:52:01.400640   48612 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0422 17:52:01.400650   48612 command_runner.go:130] > # bind_mount_prefix = ""
	I0422 17:52:01.400662   48612 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0422 17:52:01.400671   48612 command_runner.go:130] > # read_only = false
	I0422 17:52:01.400686   48612 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0422 17:52:01.400760   48612 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0422 17:52:01.400783   48612 command_runner.go:130] > # live configuration reload.
	I0422 17:52:01.400793   48612 command_runner.go:130] > # log_level = "info"
	I0422 17:52:01.400805   48612 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0422 17:52:01.400816   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.400825   48612 command_runner.go:130] > # log_filter = ""
	I0422 17:52:01.400838   48612 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0422 17:52:01.400854   48612 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0422 17:52:01.400862   48612 command_runner.go:130] > # separated by comma.
	I0422 17:52:01.400883   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.400892   48612 command_runner.go:130] > # uid_mappings = ""
	I0422 17:52:01.400898   48612 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0422 17:52:01.400912   48612 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0422 17:52:01.400926   48612 command_runner.go:130] > # separated by comma.
	I0422 17:52:01.400941   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.400958   48612 command_runner.go:130] > # gid_mappings = ""
	I0422 17:52:01.400971   48612 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0422 17:52:01.400981   48612 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 17:52:01.400989   48612 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 17:52:01.401014   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.401025   48612 command_runner.go:130] > # minimum_mappable_uid = -1
	I0422 17:52:01.401038   48612 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0422 17:52:01.401051   48612 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 17:52:01.401062   48612 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 17:52:01.401074   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.401084   48612 command_runner.go:130] > # minimum_mappable_gid = -1
	I0422 17:52:01.401097   48612 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0422 17:52:01.401110   48612 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0422 17:52:01.401122   48612 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0422 17:52:01.401132   48612 command_runner.go:130] > # ctr_stop_timeout = 30
	I0422 17:52:01.401144   48612 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0422 17:52:01.401153   48612 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0422 17:52:01.401160   48612 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0422 17:52:01.401172   48612 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0422 17:52:01.401181   48612 command_runner.go:130] > drop_infra_ctr = false
	I0422 17:52:01.401194   48612 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0422 17:52:01.401207   48612 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0422 17:52:01.401221   48612 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0422 17:52:01.401230   48612 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0422 17:52:01.401237   48612 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0422 17:52:01.401249   48612 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0422 17:52:01.401259   48612 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0422 17:52:01.401270   48612 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0422 17:52:01.401280   48612 command_runner.go:130] > # shared_cpuset = ""
	I0422 17:52:01.401290   48612 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0422 17:52:01.401301   48612 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0422 17:52:01.401308   48612 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0422 17:52:01.401320   48612 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0422 17:52:01.401327   48612 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0422 17:52:01.401336   48612 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0422 17:52:01.401353   48612 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0422 17:52:01.401364   48612 command_runner.go:130] > # enable_criu_support = false
	I0422 17:52:01.401375   48612 command_runner.go:130] > # Enable/disable the generation of the container,
	I0422 17:52:01.401387   48612 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0422 17:52:01.401397   48612 command_runner.go:130] > # enable_pod_events = false
	I0422 17:52:01.401412   48612 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0422 17:52:01.401425   48612 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0422 17:52:01.401438   48612 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0422 17:52:01.401448   48612 command_runner.go:130] > # default_runtime = "runc"
	I0422 17:52:01.401459   48612 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0422 17:52:01.401471   48612 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0422 17:52:01.401488   48612 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0422 17:52:01.401496   48612 command_runner.go:130] > # creation as a file is not desired either.
	I0422 17:52:01.401508   48612 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0422 17:52:01.401519   48612 command_runner.go:130] > # the hostname is being managed dynamically.
	I0422 17:52:01.401530   48612 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0422 17:52:01.401539   48612 command_runner.go:130] > # ]
	I0422 17:52:01.401551   48612 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0422 17:52:01.401564   48612 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0422 17:52:01.401575   48612 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0422 17:52:01.401583   48612 command_runner.go:130] > # Each entry in the table should follow the format:
	I0422 17:52:01.401589   48612 command_runner.go:130] > #
	I0422 17:52:01.401600   48612 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0422 17:52:01.401616   48612 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0422 17:52:01.401677   48612 command_runner.go:130] > # runtime_type = "oci"
	I0422 17:52:01.401688   48612 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0422 17:52:01.401701   48612 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0422 17:52:01.401710   48612 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0422 17:52:01.401720   48612 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0422 17:52:01.401729   48612 command_runner.go:130] > # monitor_env = []
	I0422 17:52:01.401740   48612 command_runner.go:130] > # privileged_without_host_devices = false
	I0422 17:52:01.401749   48612 command_runner.go:130] > # allowed_annotations = []
	I0422 17:52:01.401757   48612 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0422 17:52:01.401765   48612 command_runner.go:130] > # Where:
	I0422 17:52:01.401777   48612 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0422 17:52:01.401791   48612 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0422 17:52:01.401804   48612 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0422 17:52:01.401817   48612 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0422 17:52:01.401828   48612 command_runner.go:130] > #   in $PATH.
	I0422 17:52:01.401838   48612 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0422 17:52:01.401847   48612 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0422 17:52:01.401864   48612 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0422 17:52:01.401874   48612 command_runner.go:130] > #   state.
	I0422 17:52:01.401887   48612 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0422 17:52:01.401898   48612 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0422 17:52:01.401911   48612 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0422 17:52:01.401924   48612 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0422 17:52:01.401935   48612 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0422 17:52:01.401948   48612 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0422 17:52:01.401959   48612 command_runner.go:130] > #   The currently recognized values are:
	I0422 17:52:01.401973   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0422 17:52:01.401987   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0422 17:52:01.401998   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0422 17:52:01.402008   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0422 17:52:01.402020   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0422 17:52:01.402034   48612 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0422 17:52:01.402048   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0422 17:52:01.402073   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0422 17:52:01.402085   48612 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0422 17:52:01.402096   48612 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0422 17:52:01.402103   48612 command_runner.go:130] > #   deprecated option "conmon".
	I0422 17:52:01.402118   48612 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0422 17:52:01.402129   48612 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0422 17:52:01.402144   48612 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0422 17:52:01.402155   48612 command_runner.go:130] > #   should be moved to the container's cgroup
	I0422 17:52:01.402168   48612 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0422 17:52:01.402178   48612 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0422 17:52:01.402187   48612 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0422 17:52:01.402198   48612 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0422 17:52:01.402207   48612 command_runner.go:130] > #
	I0422 17:52:01.402217   48612 command_runner.go:130] > # Using the seccomp notifier feature:
	I0422 17:52:01.402226   48612 command_runner.go:130] > #
	I0422 17:52:01.402238   48612 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0422 17:52:01.402249   48612 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0422 17:52:01.402257   48612 command_runner.go:130] > #
	I0422 17:52:01.402265   48612 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0422 17:52:01.402273   48612 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0422 17:52:01.402286   48612 command_runner.go:130] > #
	I0422 17:52:01.402301   48612 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0422 17:52:01.402310   48612 command_runner.go:130] > # feature.
	I0422 17:52:01.402315   48612 command_runner.go:130] > #
	I0422 17:52:01.402328   48612 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0422 17:52:01.402341   48612 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0422 17:52:01.402357   48612 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0422 17:52:01.402366   48612 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0422 17:52:01.402374   48612 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0422 17:52:01.402379   48612 command_runner.go:130] > #
	I0422 17:52:01.402389   48612 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0422 17:52:01.402402   48612 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0422 17:52:01.402416   48612 command_runner.go:130] > #
	I0422 17:52:01.402432   48612 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0422 17:52:01.402444   48612 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0422 17:52:01.402452   48612 command_runner.go:130] > #
	I0422 17:52:01.402461   48612 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0422 17:52:01.402471   48612 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0422 17:52:01.402477   48612 command_runner.go:130] > # limitation.
	I0422 17:52:01.402482   48612 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0422 17:52:01.402488   48612 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0422 17:52:01.402492   48612 command_runner.go:130] > runtime_type = "oci"
	I0422 17:52:01.402498   48612 command_runner.go:130] > runtime_root = "/run/runc"
	I0422 17:52:01.402503   48612 command_runner.go:130] > runtime_config_path = ""
	I0422 17:52:01.402509   48612 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0422 17:52:01.402514   48612 command_runner.go:130] > monitor_cgroup = "pod"
	I0422 17:52:01.402520   48612 command_runner.go:130] > monitor_exec_cgroup = ""
	I0422 17:52:01.402524   48612 command_runner.go:130] > monitor_env = [
	I0422 17:52:01.402536   48612 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 17:52:01.402544   48612 command_runner.go:130] > ]
	I0422 17:52:01.402553   48612 command_runner.go:130] > privileged_without_host_devices = false
	I0422 17:52:01.402566   48612 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0422 17:52:01.402578   48612 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0422 17:52:01.402590   48612 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0422 17:52:01.402604   48612 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0422 17:52:01.402616   48612 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0422 17:52:01.402629   48612 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0422 17:52:01.402640   48612 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0422 17:52:01.402649   48612 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0422 17:52:01.402657   48612 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0422 17:52:01.402664   48612 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0422 17:52:01.402670   48612 command_runner.go:130] > # Example:
	I0422 17:52:01.402675   48612 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0422 17:52:01.402682   48612 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0422 17:52:01.402687   48612 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0422 17:52:01.402693   48612 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0422 17:52:01.402696   48612 command_runner.go:130] > # cpuset = 0
	I0422 17:52:01.402703   48612 command_runner.go:130] > # cpushares = "0-1"
	I0422 17:52:01.402706   48612 command_runner.go:130] > # Where:
	I0422 17:52:01.402713   48612 command_runner.go:130] > # The workload name is workload-type.
	I0422 17:52:01.402720   48612 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0422 17:52:01.402727   48612 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0422 17:52:01.402733   48612 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0422 17:52:01.402744   48612 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0422 17:52:01.402753   48612 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
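Following the example in the comments above, a pod opting into the hypothetical "workload-type" workload carries the key-only activation annotation plus an optional per-container override. A small sketch of building those annotations; the container name and cpushares value are placeholders.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Per-container override; the resource key ("cpushares") follows the
        // [crio.runtime.workloads.*.resources] keys shown above.
        override, err := json.Marshal(map[string]string{"cpushares": "512"}) // placeholder value
        if err != nil {
            panic(err)
        }

        annotations := map[string]string{
            // Activation annotation: matched on the key only, the value is ignored.
            "io.crio/workload": "",
            // annotation_prefix + "/" + container name selects which container to override.
            "io.crio.workload-type/my-container": string(override), // placeholder container name
        }
        for k, v := range annotations {
            fmt.Printf("%s = %q\n", k, v)
        }
    }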
	I0422 17:52:01.402762   48612 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0422 17:52:01.402777   48612 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0422 17:52:01.402785   48612 command_runner.go:130] > # Default value is set to true
	I0422 17:52:01.402790   48612 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0422 17:52:01.402797   48612 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0422 17:52:01.402804   48612 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0422 17:52:01.402808   48612 command_runner.go:130] > # Default value is set to 'false'
	I0422 17:52:01.402815   48612 command_runner.go:130] > # disable_hostport_mapping = false
	I0422 17:52:01.402821   48612 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0422 17:52:01.402825   48612 command_runner.go:130] > #
	I0422 17:52:01.402830   48612 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0422 17:52:01.402836   48612 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0422 17:52:01.402841   48612 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0422 17:52:01.402847   48612 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0422 17:52:01.402855   48612 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0422 17:52:01.402858   48612 command_runner.go:130] > [crio.image]
	I0422 17:52:01.402863   48612 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0422 17:52:01.402872   48612 command_runner.go:130] > # default_transport = "docker://"
	I0422 17:52:01.402877   48612 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0422 17:52:01.402883   48612 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0422 17:52:01.402886   48612 command_runner.go:130] > # global_auth_file = ""
	I0422 17:52:01.402891   48612 command_runner.go:130] > # The image used to instantiate infra containers.
	I0422 17:52:01.402895   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.402899   48612 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0422 17:52:01.402905   48612 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0422 17:52:01.402910   48612 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0422 17:52:01.402918   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.402922   48612 command_runner.go:130] > # pause_image_auth_file = ""
	I0422 17:52:01.402927   48612 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0422 17:52:01.402932   48612 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0422 17:52:01.402937   48612 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0422 17:52:01.402943   48612 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0422 17:52:01.402947   48612 command_runner.go:130] > # pause_command = "/pause"
	I0422 17:52:01.402952   48612 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0422 17:52:01.402957   48612 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0422 17:52:01.402963   48612 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0422 17:52:01.402969   48612 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0422 17:52:01.402975   48612 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0422 17:52:01.402980   48612 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0422 17:52:01.402983   48612 command_runner.go:130] > # pinned_images = [
	I0422 17:52:01.402986   48612 command_runner.go:130] > # ]
	I0422 17:52:01.402992   48612 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0422 17:52:01.402997   48612 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0422 17:52:01.403003   48612 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0422 17:52:01.403012   48612 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0422 17:52:01.403017   48612 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0422 17:52:01.403024   48612 command_runner.go:130] > # signature_policy = ""
	I0422 17:52:01.403029   48612 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0422 17:52:01.403038   48612 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0422 17:52:01.403046   48612 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0422 17:52:01.403056   48612 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0422 17:52:01.403062   48612 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0422 17:52:01.403069   48612 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0422 17:52:01.403079   48612 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0422 17:52:01.403087   48612 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0422 17:52:01.403093   48612 command_runner.go:130] > # changing them here.
	I0422 17:52:01.403097   48612 command_runner.go:130] > # insecure_registries = [
	I0422 17:52:01.403103   48612 command_runner.go:130] > # ]
	I0422 17:52:01.403109   48612 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0422 17:52:01.403116   48612 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0422 17:52:01.403150   48612 command_runner.go:130] > # image_volumes = "mkdir"
	I0422 17:52:01.403161   48612 command_runner.go:130] > # Temporary directory to use for storing big files
	I0422 17:52:01.403166   48612 command_runner.go:130] > # big_files_temporary_dir = ""
	I0422 17:52:01.403174   48612 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0422 17:52:01.403180   48612 command_runner.go:130] > # CNI plugins.
	I0422 17:52:01.403184   48612 command_runner.go:130] > [crio.network]
	I0422 17:52:01.403192   48612 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0422 17:52:01.403200   48612 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0422 17:52:01.403205   48612 command_runner.go:130] > # cni_default_network = ""
	I0422 17:52:01.403210   48612 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0422 17:52:01.403218   48612 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0422 17:52:01.403223   48612 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0422 17:52:01.403229   48612 command_runner.go:130] > # plugin_dirs = [
	I0422 17:52:01.403233   48612 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0422 17:52:01.403239   48612 command_runner.go:130] > # ]
	I0422 17:52:01.403244   48612 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0422 17:52:01.403250   48612 command_runner.go:130] > [crio.metrics]
	I0422 17:52:01.403255   48612 command_runner.go:130] > # Globally enable or disable metrics support.
	I0422 17:52:01.403261   48612 command_runner.go:130] > enable_metrics = true
	I0422 17:52:01.403266   48612 command_runner.go:130] > # Specify enabled metrics collectors.
	I0422 17:52:01.403272   48612 command_runner.go:130] > # Per default all metrics are enabled.
	I0422 17:52:01.403278   48612 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0422 17:52:01.403287   48612 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0422 17:52:01.403295   48612 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0422 17:52:01.403300   48612 command_runner.go:130] > # metrics_collectors = [
	I0422 17:52:01.403304   48612 command_runner.go:130] > # 	"operations",
	I0422 17:52:01.403311   48612 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0422 17:52:01.403316   48612 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0422 17:52:01.403322   48612 command_runner.go:130] > # 	"operations_errors",
	I0422 17:52:01.403333   48612 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0422 17:52:01.403339   48612 command_runner.go:130] > # 	"image_pulls_by_name",
	I0422 17:52:01.403344   48612 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0422 17:52:01.403349   48612 command_runner.go:130] > # 	"image_pulls_failures",
	I0422 17:52:01.403356   48612 command_runner.go:130] > # 	"image_pulls_successes",
	I0422 17:52:01.403360   48612 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0422 17:52:01.403367   48612 command_runner.go:130] > # 	"image_layer_reuse",
	I0422 17:52:01.403371   48612 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0422 17:52:01.403377   48612 command_runner.go:130] > # 	"containers_oom_total",
	I0422 17:52:01.403381   48612 command_runner.go:130] > # 	"containers_oom",
	I0422 17:52:01.403387   48612 command_runner.go:130] > # 	"processes_defunct",
	I0422 17:52:01.403390   48612 command_runner.go:130] > # 	"operations_total",
	I0422 17:52:01.403397   48612 command_runner.go:130] > # 	"operations_latency_seconds",
	I0422 17:52:01.403401   48612 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0422 17:52:01.403408   48612 command_runner.go:130] > # 	"operations_errors_total",
	I0422 17:52:01.403412   48612 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0422 17:52:01.403422   48612 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0422 17:52:01.403429   48612 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0422 17:52:01.403434   48612 command_runner.go:130] > # 	"image_pulls_success_total",
	I0422 17:52:01.403440   48612 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0422 17:52:01.403445   48612 command_runner.go:130] > # 	"containers_oom_count_total",
	I0422 17:52:01.403451   48612 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0422 17:52:01.403456   48612 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0422 17:52:01.403461   48612 command_runner.go:130] > # ]
	I0422 17:52:01.403466   48612 command_runner.go:130] > # The port on which the metrics server will listen.
	I0422 17:52:01.403472   48612 command_runner.go:130] > # metrics_port = 9090
	I0422 17:52:01.403477   48612 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0422 17:52:01.403484   48612 command_runner.go:130] > # metrics_socket = ""
	I0422 17:52:01.403489   48612 command_runner.go:130] > # The certificate for the secure metrics server.
	I0422 17:52:01.403497   48612 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0422 17:52:01.403505   48612 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0422 17:52:01.403510   48612 command_runner.go:130] > # certificate on any modification event.
	I0422 17:52:01.403516   48612 command_runner.go:130] > # metrics_cert = ""
	I0422 17:52:01.403521   48612 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0422 17:52:01.403527   48612 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0422 17:52:01.403531   48612 command_runner.go:130] > # metrics_key = ""
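With enable_metrics = true and the default metrics_port of 9090, the collectors listed above are exposed in Prometheus text format. A rough sketch of scraping the image-pull counters from the node, assuming the metrics server is plain HTTP and reachable on 127.0.0.1:9090; exact metric names can vary between CRI-O versions.

    package main

    import (
        "bufio"
        "fmt"
        "net/http"
        "strings"
    )

    func main() {
        // Assumes the CRI-O metrics server is enabled and listening on the default port.
        resp, err := http.Get("http://127.0.0.1:9090/metrics")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        scanner := bufio.NewScanner(resp.Body)
        for scanner.Scan() {
            line := scanner.Text()
            // Keep only the image-pull series from the collectors listed above.
            if strings.Contains(line, "image_pulls") && !strings.HasPrefix(line, "#") {
                fmt.Println(line)
            }
        }
        if err := scanner.Err(); err != nil {
            panic(err)
        }
    }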
	I0422 17:52:01.403541   48612 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0422 17:52:01.403548   48612 command_runner.go:130] > [crio.tracing]
	I0422 17:52:01.403553   48612 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0422 17:52:01.403560   48612 command_runner.go:130] > # enable_tracing = false
	I0422 17:52:01.403565   48612 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0422 17:52:01.403571   48612 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0422 17:52:01.403578   48612 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0422 17:52:01.403585   48612 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0422 17:52:01.403589   48612 command_runner.go:130] > # CRI-O NRI configuration.
	I0422 17:52:01.403595   48612 command_runner.go:130] > [crio.nri]
	I0422 17:52:01.403600   48612 command_runner.go:130] > # Globally enable or disable NRI.
	I0422 17:52:01.403606   48612 command_runner.go:130] > # enable_nri = false
	I0422 17:52:01.403612   48612 command_runner.go:130] > # NRI socket to listen on.
	I0422 17:52:01.403619   48612 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0422 17:52:01.403624   48612 command_runner.go:130] > # NRI plugin directory to use.
	I0422 17:52:01.403630   48612 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0422 17:52:01.403635   48612 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0422 17:52:01.403642   48612 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0422 17:52:01.403647   48612 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0422 17:52:01.403654   48612 command_runner.go:130] > # nri_disable_connections = false
	I0422 17:52:01.403659   48612 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0422 17:52:01.403666   48612 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0422 17:52:01.403671   48612 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0422 17:52:01.403678   48612 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0422 17:52:01.403684   48612 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0422 17:52:01.403689   48612 command_runner.go:130] > [crio.stats]
	I0422 17:52:01.403695   48612 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0422 17:52:01.403702   48612 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0422 17:52:01.403706   48612 command_runner.go:130] > # stats_collection_period = 0
	I0422 17:52:01.403855   48612 cni.go:84] Creating CNI manager for ""
	I0422 17:52:01.403872   48612 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 17:52:01.403885   48612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 17:52:01.403905   48612 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-704531 NodeName:multinode-704531 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 17:52:01.404045   48612 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-704531"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 17:52:01.404104   48612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:52:01.414740   48612 command_runner.go:130] > kubeadm
	I0422 17:52:01.414774   48612 command_runner.go:130] > kubectl
	I0422 17:52:01.414778   48612 command_runner.go:130] > kubelet
	I0422 17:52:01.414808   48612 binaries.go:44] Found k8s binaries, skipping transfer
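The check above simply looks for the three binaries under the version-specific directory and skips the transfer when all are present. A simplified sketch of that decision; the directory mirrors the path from the log and would need adjusting for other versions.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory and version taken from the log above.
        dir := "/var/lib/minikube/binaries/v1.30.0"
        needed := []string{"kubeadm", "kubectl", "kubelet"}

        missing := false
        for _, name := range needed {
            if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
                fmt.Printf("missing %s: %v\n", name, err)
                missing = true
            }
        }
        if missing {
            fmt.Println("binaries missing, transfer required")
            return
        }
        fmt.Println("found k8s binaries, skipping transfer")
    }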
	I0422 17:52:01.414863   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 17:52:01.426012   48612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0422 17:52:01.444484   48612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:52:01.462231   48612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0422 17:52:01.480152   48612 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I0422 17:52:01.484240   48612 command_runner.go:130] > 192.168.39.41	control-plane.minikube.internal
	I0422 17:52:01.484468   48612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:52:01.621815   48612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:52:01.636511   48612 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531 for IP: 192.168.39.41
	I0422 17:52:01.636536   48612 certs.go:194] generating shared ca certs ...
	I0422 17:52:01.636551   48612 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:52:01.636714   48612 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:52:01.636754   48612 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:52:01.636764   48612 certs.go:256] generating profile certs ...
	I0422 17:52:01.636837   48612 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/client.key
	I0422 17:52:01.636903   48612 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.key.5a12d634
	I0422 17:52:01.636943   48612 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.key
	I0422 17:52:01.636954   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:52:01.636974   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:52:01.636986   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:52:01.636998   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:52:01.637007   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:52:01.637020   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:52:01.637032   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:52:01.637043   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:52:01.637090   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:52:01.637120   48612 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:52:01.637130   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:52:01.637156   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:52:01.637179   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:52:01.637199   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:52:01.637231   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:52:01.637260   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:52:01.637273   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:01.637285   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:52:01.637843   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:52:01.663697   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:52:01.688909   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:52:01.713919   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:52:01.739491   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 17:52:01.764219   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 17:52:01.815597   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:52:01.888912   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:52:01.928613   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:52:01.973988   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:52:02.003832   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:52:02.043020   48612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 17:52:02.060436   48612 ssh_runner.go:195] Run: openssl version
	I0422 17:52:02.073718   48612 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0422 17:52:02.073810   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:52:02.091779   48612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.101183   48612 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.101217   48612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.101262   48612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.108883   48612 command_runner.go:130] > b5213941
	I0422 17:52:02.109205   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:52:02.129187   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:52:02.151362   48612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.157338   48612 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.157461   48612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.157522   48612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.163464   48612 command_runner.go:130] > 51391683
	I0422 17:52:02.163745   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:52:02.173642   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:52:02.185279   48612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.189972   48612 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.190067   48612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.190128   48612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.196137   48612 command_runner.go:130] > 3ec20f2e
	I0422 17:52:02.196205   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
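The hash-and-symlink steps above make each copied CA usable by OpenSSL's <hash>.0 lookup convention. A rough sketch of the same sequence; it shells out to openssl for the subject hash rather than re-implementing it, uses the path from the log, and needs root to write under /etc/ssl/certs on a real node.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above

        // openssl x509 -hash prints the subject hash used as the <hash>.0 link name.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Equivalent to: test -L <link> || ln -fs <pem> <link>
        if _, err := os.Lstat(link); err != nil {
            if err := os.Symlink(pem, link); err != nil {
                panic(err)
            }
        }
        fmt.Printf("%s -> %s\n", link, pem)
    }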
	I0422 17:52:02.208424   48612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:52:02.213152   48612 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:52:02.213174   48612 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0422 17:52:02.213179   48612 command_runner.go:130] > Device: 253,1	Inode: 5245462     Links: 1
	I0422 17:52:02.213185   48612 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 17:52:02.213191   48612 command_runner.go:130] > Access: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213196   48612 command_runner.go:130] > Modify: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213200   48612 command_runner.go:130] > Change: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213205   48612 command_runner.go:130] >  Birth: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213254   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 17:52:02.219264   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.219321   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 17:52:02.225324   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.225516   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 17:52:02.231597   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.231663   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 17:52:02.237589   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.237635   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 17:52:02.243516   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.243585   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 17:52:02.249138   48612 command_runner.go:130] > Certificate will not expire
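openssl x509 -checkend 86400 succeeds when the certificate is still valid 24 hours from now. An equivalent check with crypto/x509, applied to one of the certificates probed above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certificates checked in the log above.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }

        // Same window as -checkend 86400: does the cert outlive the next 24 hours?
        if time.Now().Add(24 * time.Hour).Before(cert.NotAfter) {
            fmt.Println("Certificate will not expire")
        } else {
            fmt.Println("Certificate will expire")
        }
    }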
	I0422 17:52:02.249327   48612 kubeadm.go:391] StartCluster: {Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:52:02.249445   48612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 17:52:02.249488   48612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 17:52:02.293252   48612 command_runner.go:130] > 8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b
	I0422 17:52:02.293283   48612 command_runner.go:130] > 5a941051f7430fa5546b0dc808e18736747e102e993cdd06453fda10c2cc8aa8
	I0422 17:52:02.293292   48612 command_runner.go:130] > 2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf
	I0422 17:52:02.293300   48612 command_runner.go:130] > 0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc
	I0422 17:52:02.293334   48612 command_runner.go:130] > cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9
	I0422 17:52:02.293461   48612 command_runner.go:130] > d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c
	I0422 17:52:02.293512   48612 command_runner.go:130] > 70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd
	I0422 17:52:02.293530   48612 command_runner.go:130] > 04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238
	I0422 17:52:02.293581   48612 command_runner.go:130] > 809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227
	I0422 17:52:02.295148   48612 cri.go:89] found id: "8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b"
	I0422 17:52:02.295165   48612 cri.go:89] found id: "5a941051f7430fa5546b0dc808e18736747e102e993cdd06453fda10c2cc8aa8"
	I0422 17:52:02.295169   48612 cri.go:89] found id: "2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf"
	I0422 17:52:02.295173   48612 cri.go:89] found id: "0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc"
	I0422 17:52:02.295175   48612 cri.go:89] found id: "cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9"
	I0422 17:52:02.295179   48612 cri.go:89] found id: "d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c"
	I0422 17:52:02.295188   48612 cri.go:89] found id: "70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd"
	I0422 17:52:02.295191   48612 cri.go:89] found id: "04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238"
	I0422 17:52:02.295193   48612 cri.go:89] found id: "809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227"
	I0422 17:52:02.295198   48612 cri.go:89] found id: ""
	I0422 17:52:02.295235   48612 ssh_runner.go:195] Run: sudo runc list -f json
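The listing step above boils down to asking crictl for every container ID labelled with the kube-system namespace and collecting the IDs it prints. A condensed sketch of the same call, assuming crictl is on PATH and the caller has permission to talk to the CRI socket (the log runs it via sudo):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirrors: crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
        out, err := exec.Command("crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("found id:", id)
        }
    }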
	
	
	==> CRI-O <==
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.854433453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12ef0350-646f-40dc-9e34-71f8d7c73a62 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.855864688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a848bfa7-7077-42eb-9ca7-ae848f2fc209 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.856514090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808409856489455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a848bfa7-7077-42eb-9ca7-ae848f2fc209 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.857063852Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79785859-dd87-4f78-98c4-fa566dd0a4ea name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.857238575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79785859-dd87-4f78-98c4-fa566dd0a4ea name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.857606568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713808322026573835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9117e7d64a8253d5e2ee23309263d027af705b46a57b4fd1fd5051834bc86b,PodSandboxId:a4e447cc6f932d38cfc5b8b68594a3ba6bd483e01b9039f816d24731ab44fe0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713808013761246284,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf,PodSandboxId:7d0ec5828f1fca0d9b875f7b951fa9051dbe9c42ae38d06ba86f06586e7b0500,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713807966107071187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc,PodSandboxId:6b27f46bb9e6eee91cbd3afbfc692abbadddb9989484d05137dc0d2605bd8ca8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713807934554517743,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kubernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9,PodSandboxId:66b4007bd8f4727741ae6122f57724ef2dd8c7e892653479fd3dc2f56ab92ca7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713807934229081913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-
ac158419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd,PodSandboxId:b8e008b39b0318e9c70bfd3c2ad26ff85a679302a82e8dfb952f0b4b2d80b066,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713807914701295945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{i
o.kubernetes.container.hash: fc381e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c,PodSandboxId:702623bbb0434dde5b0d50f1b9bfd4f2233268f8abe226b126c139ee1bbf033c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713807914726410666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238,PodSandboxId:a0df83489b2a0d58fa168e5758d8f70a7f93a3a26b325825c9cc3d60768f0f5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713807914700034523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227,PodSandboxId:cd7cdc5dbff4a87d9b4bd1f964a50d1ba316e6b8d58fc7e1de9da07b7571f72f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713807914629308567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 77cf3021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79785859-dd87-4f78-98c4-fa566dd0a4ea name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.901632474Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c24a83ed-3b74-40d0-be74-3d342522d883 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.901706410Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c24a83ed-3b74-40d0-be74-3d342522d883 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.903520034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fec37682-56a3-4912-aaf8-8395a6b71a17 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.904055768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808409904031917,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fec37682-56a3-4912-aaf8-8395a6b71a17 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.904694506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=614853b1-48fb-4e12-b6e0-21a0bf401e3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.904770128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=614853b1-48fb-4e12-b6e0-21a0bf401e3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.905116763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713808322026573835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9117e7d64a8253d5e2ee23309263d027af705b46a57b4fd1fd5051834bc86b,PodSandboxId:a4e447cc6f932d38cfc5b8b68594a3ba6bd483e01b9039f816d24731ab44fe0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713808013761246284,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf,PodSandboxId:7d0ec5828f1fca0d9b875f7b951fa9051dbe9c42ae38d06ba86f06586e7b0500,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713807966107071187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc,PodSandboxId:6b27f46bb9e6eee91cbd3afbfc692abbadddb9989484d05137dc0d2605bd8ca8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713807934554517743,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kubernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9,PodSandboxId:66b4007bd8f4727741ae6122f57724ef2dd8c7e892653479fd3dc2f56ab92ca7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713807934229081913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-
ac158419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd,PodSandboxId:b8e008b39b0318e9c70bfd3c2ad26ff85a679302a82e8dfb952f0b4b2d80b066,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713807914701295945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{i
o.kubernetes.container.hash: fc381e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c,PodSandboxId:702623bbb0434dde5b0d50f1b9bfd4f2233268f8abe226b126c139ee1bbf033c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713807914726410666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238,PodSandboxId:a0df83489b2a0d58fa168e5758d8f70a7f93a3a26b325825c9cc3d60768f0f5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713807914700034523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227,PodSandboxId:cd7cdc5dbff4a87d9b4bd1f964a50d1ba316e6b8d58fc7e1de9da07b7571f72f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713807914629308567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 77cf3021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=614853b1-48fb-4e12-b6e0-21a0bf401e3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.941235891Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce7ac803-116e-42da-8dd8-6e32d33d3fe5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.941517947Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-bl7n4,Uid:e8c1f2b8-194c-4567-93d8-77a38ede22cc,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713808361714601292,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:52:14.868466225Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&PodSandboxMetadata{Name:kube-proxy-brdh6,Uid:b111ab97-6b54-4006-bc09-ac158419ceb0,Namespace:kube-system,Attempt:1,},State:S
ANDBOX_READY,CreatedAt:1713808327959608898,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac158419ceb0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:45:33.084029324Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:74c83b5d-e7bf-46d9-bf28-be78b4e89874,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713808327955536118,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotat
ions:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-22T17:46:05.627236357Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-704531,Uid:fff3f898b6674ab09e5d63885ec6
b689,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713808327952513806,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.41:8443,kubernetes.io/config.hash: fff3f898b6674ab09e5d63885ec6b689,kubernetes.io/config.seen: 2024-04-22T17:45:20.438555104Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&PodSandboxMetadata{Name:kindnet-fpnzz,Uid:9555b728-9998-4aa2-8c3c-5fb759a4b19f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713808327942520636,Labels:map[string]string{app: kindnet,controller-revision-hash: 64fdfd5c6d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kin
dnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:45:33.108495925Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&PodSandboxMetadata{Name:etcd-multinode-704531,Uid:de3430ee01f0bb1d67323a9ff296b867,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713808327931711023,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.41:2379,kubernetes.io/config.hash: de3430ee01f0bb1d67323a9ff296b867,kubernetes.io/config.seen: 2024-04-22T17:45:20.438
550966Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-704531,Uid:2fb29928136c8425bfab46a4e157ddb8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713808327920494561,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2fb29928136c8425bfab46a4e157ddb8,kubernetes.io/config.seen: 2024-04-22T17:45:20.438557096Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-704531,Uid:95b7daaeea1c41dfd355334ee34a430c,Namespace:kube-system,Attempt:1,},State:S
ANDBOX_READY,CreatedAt:1713808327918448593,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 95b7daaeea1c41dfd355334ee34a430c,kubernetes.io/config.seen: 2024-04-22T17:45:20.438556224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-b9mkg,Uid:4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713808321854577867,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,k8s-app: kube-dns,pod-templa
te-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T17:46:05.631883326Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ce7ac803-116e-42da-8dd8-6e32d33d3fe5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.942255315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c5f0c82-a32b-4f16-91c2-992ac71f2776 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.942317380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c5f0c82-a32b-4f16-91c2-992ac71f2776 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.942506277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c5f0c82-a32b-4f16-91c2-992ac71f2776 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.951410951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8a55e77d-0a4c-4d97-8f2e-a067375282fa name=/runtime.v1.RuntimeService/Version
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.951498024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8a55e77d-0a4c-4d97-8f2e-a067375282fa name=/runtime.v1.RuntimeService/Version
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.953023645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8169070-8440-4714-a985-194e030b2935 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.953742574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808409953719430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8169070-8440-4714-a985-194e030b2935 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.954339247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe653c38-dfc4-4b5c-a5f4-116d89b1edf9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.954390569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe653c38-dfc4-4b5c-a5f4-116d89b1edf9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:53:29 multinode-704531 crio[2847]: time="2024-04-22 17:53:29.954723935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713808322026573835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9117e7d64a8253d5e2ee23309263d027af705b46a57b4fd1fd5051834bc86b,PodSandboxId:a4e447cc6f932d38cfc5b8b68594a3ba6bd483e01b9039f816d24731ab44fe0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713808013761246284,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf,PodSandboxId:7d0ec5828f1fca0d9b875f7b951fa9051dbe9c42ae38d06ba86f06586e7b0500,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713807966107071187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc,PodSandboxId:6b27f46bb9e6eee91cbd3afbfc692abbadddb9989484d05137dc0d2605bd8ca8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713807934554517743,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kubernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9,PodSandboxId:66b4007bd8f4727741ae6122f57724ef2dd8c7e892653479fd3dc2f56ab92ca7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713807934229081913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-
ac158419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd,PodSandboxId:b8e008b39b0318e9c70bfd3c2ad26ff85a679302a82e8dfb952f0b4b2d80b066,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713807914701295945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{i
o.kubernetes.container.hash: fc381e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c,PodSandboxId:702623bbb0434dde5b0d50f1b9bfd4f2233268f8abe226b126c139ee1bbf033c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713807914726410666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238,PodSandboxId:a0df83489b2a0d58fa168e5758d8f70a7f93a3a26b325825c9cc3d60768f0f5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713807914700034523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227,PodSandboxId:cd7cdc5dbff4a87d9b4bd1f964a50d1ba316e6b8d58fc7e1de9da07b7571f72f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713807914629308567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 77cf3021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe653c38-dfc4-4b5c-a5f4-116d89b1edf9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b5994c70bb640       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      48 seconds ago       Running             busybox                   1                   822725eae35bb       busybox-fc5497c4f-bl7n4
	cb2973be4400b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   09758ae980f26       coredns-7db6d8ff4d-b9mkg
	f1930947df2f1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      About a minute ago   Running             kube-proxy                1                   36a833c84c15a       kube-proxy-brdh6
	ac77b416192fe       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   1ff9ab84630ac       kindnet-fpnzz
	7162d60f78fe9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a495ae5921972       storage-provisioner
	98c3a104dcf75       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      About a minute ago   Running             kube-apiserver            1                   0086062aaf9f2       kube-apiserver-multinode-704531
	720e239dd404f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      About a minute ago   Running             kube-controller-manager   1                   01bc44ac6ed37       kube-controller-manager-multinode-704531
	d229100cdd1f9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      About a minute ago   Running             kube-scheduler            1                   f16dc9b042a41       kube-scheduler-multinode-704531
	0c4c78f87155c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   4579f0a9d1a17       etcd-multinode-704531
	8e0db90880f97       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   09758ae980f26       coredns-7db6d8ff4d-b9mkg
	cb9117e7d64a8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   a4e447cc6f932       busybox-fc5497c4f-bl7n4
	2524aeec685e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   7d0ec5828f1fc       storage-provisioner
	0f76791d387a0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago        Exited              kindnet-cni               0                   6b27f46bb9e6e       kindnet-fpnzz
	cc22cce807d1e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      7 minutes ago        Exited              kube-proxy                0                   66b4007bd8f47       kube-proxy-brdh6
	d49dfeca2d9d4       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      8 minutes ago        Exited              kube-scheduler            0                   702623bbb0434       kube-scheduler-multinode-704531
	70d0eea95ffd4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   b8e008b39b031       etcd-multinode-704531
	04c12d47455f2       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      8 minutes ago        Exited              kube-controller-manager   0                   a0df83489b2a0       kube-controller-manager-multinode-704531
	809aa2caf411e       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      8 minutes ago        Exited              kube-apiserver            0                   cd7cdc5dbff4a       kube-apiserver-multinode-704531
	
	
	==> coredns [8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43162 - 46839 "HINFO IN 851568339466806155.2057316636928923104. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007270821s
	
	
	==> coredns [cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56281 - 56367 "HINFO IN 4100348979122285734.3932258701465694796. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010155518s
	
	
	==> describe nodes <==
	Name:               multinode-704531
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-704531
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=multinode-704531
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_45_21_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:45:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-704531
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:53:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:52:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    multinode-704531
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c8791a2e0d64a44bf77be520827dcc7
	  System UUID:                7c8791a2-e0d6-4a44-bf77-be520827dcc7
	  Boot ID:                    6cf024ff-2a11-43e2-a21f-9b6e95f58a47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bl7n4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 coredns-7db6d8ff4d-b9mkg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m57s
	  kube-system                 etcd-multinode-704531                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m10s
	  kube-system                 kindnet-fpnzz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m57s
	  kube-system                 kube-apiserver-multinode-704531             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-controller-manager-multinode-704531    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-proxy-brdh6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 kube-scheduler-multinode-704531             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m55s                  kube-proxy       
	  Normal  Starting                 78s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  8m16s (x8 over 8m16s)  kubelet          Node multinode-704531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s (x8 over 8m16s)  kubelet          Node multinode-704531 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s (x7 over 8m16s)  kubelet          Node multinode-704531 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     8m10s                  kubelet          Node multinode-704531 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m10s                  kubelet          Node multinode-704531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m10s                  kubelet          Node multinode-704531 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m10s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m57s                  node-controller  Node multinode-704531 event: Registered Node multinode-704531 in Controller
	  Normal  NodeReady                7m25s                  kubelet          Node multinode-704531 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    77s                    kubelet          Node multinode-704531 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s                    kubelet          Node multinode-704531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     77s                    kubelet          Node multinode-704531 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             76s                    kubelet          Node multinode-704531 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  76s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                    node-controller  Node multinode-704531 event: Registered Node multinode-704531 in Controller
	  Normal  NodeReady                66s                    kubelet          Node multinode-704531 status is now: NodeReady
	
	
	Name:               multinode-704531-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-704531-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=multinode-704531
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_52_48_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:52:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-704531-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:52:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:52:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:52:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:52:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    multinode-704531-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27c436d8b3ce4832a2682e4f6df119a2
	  System UUID:                27c436d8-b3ce-4832-a268-2e4f6df119a2
	  Boot ID:                    d93f299e-08cc-426d-a738-6784ee94be7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xppng    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-qtksj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m52s
	  kube-system                 kube-proxy-pdfj9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m46s                  kube-proxy  
	  Normal  Starting                 37s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m52s (x3 over 6m52s)  kubelet     Node multinode-704531-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m52s (x3 over 6m52s)  kubelet     Node multinode-704531-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m52s (x3 over 6m52s)  kubelet     Node multinode-704531-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m52s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m42s                  kubelet     Node multinode-704531-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  42s (x2 over 42s)      kubelet     Node multinode-704531-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x2 over 42s)      kubelet     Node multinode-704531-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x2 over 42s)      kubelet     Node multinode-704531-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                33s                    kubelet     Node multinode-704531-m02 status is now: NodeReady
	
	
	Name:               multinode-704531-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-704531-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=multinode-704531
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_53_18_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:53:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-704531-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:53:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:53:26 +0000   Mon, 22 Apr 2024 17:53:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:53:26 +0000   Mon, 22 Apr 2024 17:53:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:53:26 +0000   Mon, 22 Apr 2024 17:53:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:53:26 +0000   Mon, 22 Apr 2024 17:53:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.141
	  Hostname:    multinode-704531-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 405c005656e24e1b948c867b4f40fb8e
	  System UUID:                405c0056-56e2-4e1b-948c-867b4f40fb8e
	  Boot ID:                    f76ebeb5-ade4-417b-9dee-bce82c295552
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-tq7ss       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m5s
	  kube-system                 kube-proxy-kr7f2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m59s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  Starting                 5m19s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m6s (x2 over 6m6s)    kubelet     Node multinode-704531-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x2 over 6m6s)    kubelet     Node multinode-704531-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x2 over 6m6s)    kubelet     Node multinode-704531-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m5s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m55s                  kubelet     Node multinode-704531-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m24s (x2 over 5m24s)  kubelet     Node multinode-704531-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m24s (x2 over 5m24s)  kubelet     Node multinode-704531-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m24s (x2 over 5m24s)  kubelet     Node multinode-704531-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m16s                  kubelet     Node multinode-704531-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  13s (x2 over 13s)      kubelet     Node multinode-704531-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x2 over 13s)      kubelet     Node multinode-704531-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x2 over 13s)      kubelet     Node multinode-704531-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-704531-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.170755] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.152802] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.309710] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.586925] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.059983] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.647642] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.365284] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.194570] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.080365] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.679981] systemd-fstab-generator[1477]: Ignoring "noauto" option for root device
	[  +0.096222] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 17:46] kauditd_printk_skb: 60 callbacks suppressed
	[ +45.265896] kauditd_printk_skb: 12 callbacks suppressed
	[Apr22 17:51] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.155412] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +0.172746] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.148128] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.280151] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[Apr22 17:52] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +0.083106] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.428699] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.488416] systemd-fstab-generator[3811]: Ignoring "noauto" option for root device
	[  +0.094207] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.007370] systemd-fstab-generator[3930]: Ignoring "noauto" option for root device
	[  +8.089790] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf] <==
	{"level":"info","ts":"2024-04-22T17:52:08.713125Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T17:52:08.713136Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T17:52:08.713421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 switched to configuration voters=(10393760029520308295)"}
	{"level":"info","ts":"2024-04-22T17:52:08.713509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","added-peer-id":"903e0dada8362847","added-peer-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2024-04-22T17:52:08.713638Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T17:52:08.713683Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T17:52:08.720971Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T17:52:08.722739Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"903e0dada8362847","initial-advertise-peer-urls":["https://192.168.39.41:2380"],"listen-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T17:52:08.72304Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T17:52:08.723325Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:52:08.726293Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:52:10.273973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T17:52:10.274033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T17:52:10.274076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgPreVoteResp from 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-04-22T17:52:10.27409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.274096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgVoteResp from 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.274104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became leader at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.274114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 903e0dada8362847 elected leader 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.275726Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"903e0dada8362847","local-member-attributes":"{Name:multinode-704531 ClientURLs:[https://192.168.39.41:2379]}","request-path":"/0/members/903e0dada8362847/attributes","cluster-id":"b5cacf25c2f2940e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T17:52:10.275816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T17:52:10.275883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T17:52:10.277252Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T17:52:10.277301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T17:52:10.277957Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.41:2379"}
	{"level":"info","ts":"2024-04-22T17:52:10.2795Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd] <==
	{"level":"info","ts":"2024-04-22T17:45:15.772324Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T17:45:15.773235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T17:45:15.776638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T17:46:38.382401Z","caller":"traceutil/trace.go:171","msg":"trace[1575186716] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"252.751837ms","start":"2024-04-22T17:46:38.129617Z","end":"2024-04-22T17:46:38.382369Z","steps":["trace[1575186716] 'process raft request'  (duration: 246.716948ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:46:42.397093Z","caller":"traceutil/trace.go:171","msg":"trace[204413692] linearizableReadLoop","detail":"{readStateIndex:555; appliedIndex:554; }","duration":"140.270643ms","start":"2024-04-22T17:46:42.256789Z","end":"2024-04-22T17:46:42.397059Z","steps":["trace[204413692] 'read index received'  (duration: 140.099653ms)","trace[204413692] 'applied index is now lower than readState.Index'  (duration: 170.389µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:46:42.397403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.542392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-704531-m02\" ","response":"range_response_count:1 size:2823"}
	{"level":"info","ts":"2024-04-22T17:46:42.397484Z","caller":"traceutil/trace.go:171","msg":"trace[2046346196] range","detail":"{range_begin:/registry/minions/multinode-704531-m02; range_end:; response_count:1; response_revision:526; }","duration":"140.70518ms","start":"2024-04-22T17:46:42.256765Z","end":"2024-04-22T17:46:42.39747Z","steps":["trace[2046346196] 'agreement among raft nodes before linearized reading'  (duration: 140.491818ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:46:42.397624Z","caller":"traceutil/trace.go:171","msg":"trace[1512357095] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"181.704772ms","start":"2024-04-22T17:46:42.215912Z","end":"2024-04-22T17:46:42.397617Z","steps":["trace[1512357095] 'process raft request'  (duration: 181.030703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:47:25.14486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.760642ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2902445744713118401 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-704531-m03.17c8ab56300b9afc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-704531-m03.17c8ab56300b9afc\" value_size:646 lease:2902445744713117999 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-22T17:47:25.145013Z","caller":"traceutil/trace.go:171","msg":"trace[291727974] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"163.75079ms","start":"2024-04-22T17:47:24.981245Z","end":"2024-04-22T17:47:25.144996Z","steps":["trace[291727974] 'process raft request'  (duration: 163.704588ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:47:25.14523Z","caller":"traceutil/trace.go:171","msg":"trace[802930937] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"246.157952ms","start":"2024-04-22T17:47:24.899063Z","end":"2024-04-22T17:47:25.145221Z","steps":["trace[802930937] 'process raft request'  (duration: 143.300326ms)","trace[802930937] 'compare'  (duration: 101.665474ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:47:25.145418Z","caller":"traceutil/trace.go:171","msg":"trace[396834445] linearizableReadLoop","detail":"{readStateIndex:661; appliedIndex:660; }","duration":"197.340143ms","start":"2024-04-22T17:47:24.948071Z","end":"2024-04-22T17:47:25.145411Z","steps":["trace[396834445] 'read index received'  (duration: 94.302241ms)","trace[396834445] 'applied index is now lower than readState.Index'  (duration: 103.036513ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:47:25.145622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.538166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-704531-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-22T17:47:25.145667Z","caller":"traceutil/trace.go:171","msg":"trace[580240549] range","detail":"{range_begin:/registry/minions/multinode-704531-m03; range_end:; response_count:1; response_revision:623; }","duration":"197.593375ms","start":"2024-04-22T17:47:24.948066Z","end":"2024-04-22T17:47:25.145659Z","steps":["trace[580240549] 'agreement among raft nodes before linearized reading'  (duration: 197.489587ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:47:28.923508Z","caller":"traceutil/trace.go:171","msg":"trace[281588394] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"135.745117ms","start":"2024-04-22T17:47:28.787743Z","end":"2024-04-22T17:47:28.923488Z","steps":["trace[281588394] 'process raft request'  (duration: 135.583568ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:50:20.825479Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T17:50:20.825605Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-704531","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"]}
	{"level":"warn","ts":"2024-04-22T17:50:20.825778Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:50:20.82586Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:50:20.869218Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:50:20.869285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T17:50:20.869352Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"903e0dada8362847","current-leader-member-id":"903e0dada8362847"}
	{"level":"info","ts":"2024-04-22T17:50:20.875465Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:50:20.875603Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:50:20.875615Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-704531","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"]}
	
	
	==> kernel <==
	 17:53:30 up 8 min,  0 users,  load average: 0.30, 0.22, 0.11
	Linux multinode-704531 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc] <==
	I0422 17:49:35.543258       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:49:45.550953       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:49:45.550998       1 main.go:227] handling current node
	I0422 17:49:45.551019       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:49:45.551032       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:49:45.551142       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:49:45.551223       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:49:55.564056       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:49:55.564107       1 main.go:227] handling current node
	I0422 17:49:55.564118       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:49:55.564131       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:49:55.564391       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:49:55.564422       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:50:05.572840       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:50:05.573043       1 main.go:227] handling current node
	I0422 17:50:05.573090       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:50:05.573112       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:50:05.573314       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:50:05.573350       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:50:15.578621       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:50:15.578711       1 main.go:227] handling current node
	I0422 17:50:15.578735       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:50:15.578753       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:50:15.578884       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:50:15.578910       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb] <==
	I0422 17:52:41.750044       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:52:51.763826       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:52:51.763885       1 main.go:227] handling current node
	I0422 17:52:51.763902       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:52:51.763920       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:52:51.764072       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:52:51.764111       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:53:01.777855       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:53:01.777905       1 main.go:227] handling current node
	I0422 17:53:01.777932       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:53:01.777938       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:53:01.778060       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:53:01.778091       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:53:11.784559       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:53:11.784667       1 main.go:227] handling current node
	I0422 17:53:11.784702       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:53:11.784728       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:53:11.784894       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:53:11.784932       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:53:21.794321       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:53:21.794637       1 main.go:227] handling current node
	I0422 17:53:21.794701       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:53:21.794731       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:53:21.794894       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:53:21.794936       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227] <==
	E0422 17:50:20.829551       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0422 17:50:20.841501       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.859114       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.859921       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860015       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860085       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860142       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860297       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860572       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0422 17:50:20.861051       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0422 17:50:20.861492       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0422 17:50:20.861638       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0422 17:50:20.861820       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0422 17:50:20.861911       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.861987       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862054       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862117       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.859925       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862866       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862924       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.863011       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0422 17:50:20.863319       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	W0422 17:50:20.863365       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862891       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.863408       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861] <==
	I0422 17:52:11.620817       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 17:52:11.625810       1 aggregator.go:165] initial CRD sync complete...
	I0422 17:52:11.626019       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 17:52:11.626090       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 17:52:11.668307       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 17:52:11.671432       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 17:52:11.672702       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 17:52:11.675911       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 17:52:11.676003       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 17:52:11.676283       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 17:52:11.683972       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0422 17:52:11.688292       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0422 17:52:11.721119       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 17:52:11.724386       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 17:52:11.724407       1 policy_source.go:224] refreshing policies
	I0422 17:52:11.733009       1 cache.go:39] Caches are synced for autoregister controller
	I0422 17:52:11.756731       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 17:52:12.580263       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 17:52:14.515539       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 17:52:14.630237       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 17:52:14.642031       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 17:52:14.705854       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 17:52:14.717083       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 17:52:24.579540       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 17:52:24.687704       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238] <==
	I0422 17:46:08.049094       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0422 17:46:38.432565       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m02\" does not exist"
	I0422 17:46:38.443525       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m02" podCIDRs=["10.244.1.0/24"]
	I0422 17:46:43.053854       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-704531-m02"
	I0422 17:46:48.491608       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:46:50.882528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.1656ms"
	I0422 17:46:50.933434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.295014ms"
	I0422 17:46:50.933542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.497µs"
	I0422 17:46:53.915248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.385563ms"
	I0422 17:46:53.915329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.133µs"
	I0422 17:46:54.234543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.376336ms"
	I0422 17:46:54.235128       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="227.631µs"
	I0422 17:47:25.147777       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m03\" does not exist"
	I0422 17:47:25.148014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:47:25.214555       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m03" podCIDRs=["10.244.2.0/24"]
	I0422 17:47:28.072770       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-704531-m03"
	I0422 17:47:35.280364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:05.701989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:06.847249       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m03\" does not exist"
	I0422 17:48:06.847357       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:06.873550       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m03" podCIDRs=["10.244.3.0/24"]
	I0422 17:48:15.020141       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:53.128333       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m03"
	I0422 17:48:53.193584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.083181ms"
	I0422 17:48:53.194153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.897µs"
	
	
	==> kube-controller-manager [720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d] <==
	I0422 17:52:25.201350       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 17:52:25.204274       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 17:52:43.741962       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.580987ms"
	I0422 17:52:43.751357       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.259179ms"
	I0422 17:52:43.771555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.154866ms"
	I0422 17:52:43.771824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.352µs"
	I0422 17:52:46.052918       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.379µs"
	I0422 17:52:48.170098       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m02\" does not exist"
	I0422 17:52:48.180524       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m02" podCIDRs=["10.244.1.0/24"]
	I0422 17:52:50.062198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.327µs"
	I0422 17:52:50.112417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.161µs"
	I0422 17:52:50.124604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.323µs"
	I0422 17:52:50.138471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.205µs"
	I0422 17:52:50.147807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.392µs"
	I0422 17:52:50.151440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.87µs"
	I0422 17:52:57.280621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:52:57.304025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.844µs"
	I0422 17:52:57.325089       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.294µs"
	I0422 17:53:01.182981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.088833ms"
	I0422 17:53:01.190941       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.394565ms"
	I0422 17:53:16.519856       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:53:17.688343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:53:17.688436       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m03\" does not exist"
	I0422 17:53:17.718108       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m03" podCIDRs=["10.244.2.0/24"]
	I0422 17:53:26.830125       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	
	
	==> kube-proxy [cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9] <==
	I0422 17:45:34.873585       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:45:34.973932       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	I0422 17:45:35.072662       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:45:35.072757       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:45:35.072785       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:45:35.075587       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:45:35.075816       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:45:35.076014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:45:35.077033       1 config.go:192] "Starting service config controller"
	I0422 17:45:35.077119       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:45:35.077338       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:45:35.077376       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:45:35.078102       1 config.go:319] "Starting node config controller"
	I0422 17:45:35.079287       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:45:35.178219       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:45:35.178301       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:45:35.179724       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839] <==
	I0422 17:52:10.033837       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:52:11.662083       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	I0422 17:52:11.737767       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:52:11.737832       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:52:11.737849       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:52:11.742695       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:52:11.742981       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:52:11.743014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:52:11.744692       1 config.go:192] "Starting service config controller"
	I0422 17:52:11.744728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:52:11.744752       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:52:11.744755       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:52:11.745115       1 config.go:319] "Starting node config controller"
	I0422 17:52:11.745147       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:52:11.845559       1 shared_informer.go:320] Caches are synced for node config
	I0422 17:52:11.845564       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:52:11.845628       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795] <==
	W0422 17:52:11.637465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 17:52:11.637558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 17:52:11.637798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:52:11.637902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 17:52:11.638037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 17:52:11.638113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:52:11.638437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.638473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.638374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.638571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.643272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:52:11.643383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:52:11.643595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:52:11.643630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:52:11.643938       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.644075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.644108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.644240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.644254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:52:11.644268       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 17:52:11.644366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:52:11.644453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:52:11.644482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:52:11.644580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0422 17:52:12.614541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c] <==
	E0422 17:45:18.403932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:45:18.420361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:45:18.420456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:45:18.439957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:45:18.440048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 17:45:18.523766       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:45:18.523900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 17:45:18.582840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:45:18.583036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:45:18.673822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 17:45:18.673937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 17:45:18.721559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:45:18.721705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 17:45:18.742956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:45:18.743078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:45:18.765709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 17:45:18.765827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 17:45:18.786274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 17:45:18.786386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 17:45:18.907228       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:45:18.907283       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 17:45:18.907239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:45:18.907307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0422 17:45:21.514549       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 17:50:20.837733       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.334544    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fb29928136c8425bfab46a4e157ddb8-kubeconfig\") pod \"kube-scheduler-multinode-704531\" (UID: \"2fb29928136c8425bfab46a4e157ddb8\") " pod="kube-system/kube-scheduler-multinode-704531"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.334560    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fff3f898b6674ab09e5d63885ec6b689-k8s-certs\") pod \"kube-apiserver-multinode-704531\" (UID: \"fff3f898b6674ab09e5d63885ec6b689\") " pod="kube-system/kube-apiserver-multinode-704531"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.334580    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fff3f898b6674ab09e5d63885ec6b689-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-704531\" (UID: \"fff3f898b6674ab09e5d63885ec6b689\") " pod="kube-system/kube-apiserver-multinode-704531"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.865389    3818 apiserver.go:52] "Watching apiserver"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.868769    3818 topology_manager.go:215] "Topology Admit Handler" podUID="b111ab97-6b54-4006-bc09-ac158419ceb0" podNamespace="kube-system" podName="kube-proxy-brdh6"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.868972    3818 topology_manager.go:215] "Topology Admit Handler" podUID="4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b9mkg"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.869050    3818 topology_manager.go:215] "Topology Admit Handler" podUID="9555b728-9998-4aa2-8c3c-5fb759a4b19f" podNamespace="kube-system" podName="kindnet-fpnzz"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.869125    3818 topology_manager.go:215] "Topology Admit Handler" podUID="74c83b5d-e7bf-46d9-bf28-be78b4e89874" podNamespace="kube-system" podName="storage-provisioner"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.869222    3818 topology_manager.go:215] "Topology Admit Handler" podUID="e8c1f2b8-194c-4567-93d8-77a38ede22cc" podNamespace="default" podName="busybox-fc5497c4f-bl7n4"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.915345    3818 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938305    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/74c83b5d-e7bf-46d9-bf28-be78b4e89874-tmp\") pod \"storage-provisioner\" (UID: \"74c83b5d-e7bf-46d9-bf28-be78b4e89874\") " pod="kube-system/storage-provisioner"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938583    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b111ab97-6b54-4006-bc09-ac158419ceb0-xtables-lock\") pod \"kube-proxy-brdh6\" (UID: \"b111ab97-6b54-4006-bc09-ac158419ceb0\") " pod="kube-system/kube-proxy-brdh6"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938763    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9555b728-9998-4aa2-8c3c-5fb759a4b19f-cni-cfg\") pod \"kindnet-fpnzz\" (UID: \"9555b728-9998-4aa2-8c3c-5fb759a4b19f\") " pod="kube-system/kindnet-fpnzz"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938880    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9555b728-9998-4aa2-8c3c-5fb759a4b19f-xtables-lock\") pod \"kindnet-fpnzz\" (UID: \"9555b728-9998-4aa2-8c3c-5fb759a4b19f\") " pod="kube-system/kindnet-fpnzz"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.939004    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b111ab97-6b54-4006-bc09-ac158419ceb0-lib-modules\") pod \"kube-proxy-brdh6\" (UID: \"b111ab97-6b54-4006-bc09-ac158419ceb0\") " pod="kube-system/kube-proxy-brdh6"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.939209    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9555b728-9998-4aa2-8c3c-5fb759a4b19f-lib-modules\") pod \"kindnet-fpnzz\" (UID: \"9555b728-9998-4aa2-8c3c-5fb759a4b19f\") " pod="kube-system/kindnet-fpnzz"
	Apr 22 17:52:15 multinode-704531 kubelet[3818]: E0422 17:52:15.166100    3818 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-704531\" already exists" pod="kube-system/kube-apiserver-multinode-704531"
	Apr 22 17:52:15 multinode-704531 kubelet[3818]: I0422 17:52:15.170546    3818 scope.go:117] "RemoveContainer" containerID="8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b"
	Apr 22 17:52:15 multinode-704531 kubelet[3818]: E0422 17:52:15.189867    3818 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-704531\" already exists" pod="kube-system/kube-controller-manager-multinode-704531"
	Apr 22 17:52:18 multinode-704531 kubelet[3818]: I0422 17:52:18.533624    3818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 22 17:53:13 multinode-704531 kubelet[3818]: E0422 17:53:13.956737    3818 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 17:53:29.506469   49670 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-704531 -n multinode-704531
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-704531 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (313.66s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 stop
E0422 17:54:22.051621   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:55:07.901879   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-704531 stop: exit status 82 (2m0.48609313s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-704531-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-704531 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-704531 status: exit status 3 (18.785514864s)

                                                
                                                
-- stdout --
	multinode-704531
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-704531-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 17:55:53.079575   50330 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host
	E0422 17:55:53.079628   50330 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.19:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-704531 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-704531 -n multinode-704531
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-704531 logs -n 25: (1.542874928s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531:/home/docker/cp-test_multinode-704531-m02_multinode-704531.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531 sudo cat                                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m02_multinode-704531.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03:/home/docker/cp-test_multinode-704531-m02_multinode-704531-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531-m03 sudo cat                                   | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m02_multinode-704531-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp testdata/cp-test.txt                                                | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile478955910/001/cp-test_multinode-704531-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531:/home/docker/cp-test_multinode-704531-m03_multinode-704531.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531 sudo cat                                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m03_multinode-704531.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt                       | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02:/home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531-m02 sudo cat                                   | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-704531 node stop m03                                                          | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	| node    | multinode-704531 node start                                                             | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-704531                                                                | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:48 UTC |                     |
	| stop    | -p multinode-704531                                                                     | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:48 UTC |                     |
	| start   | -p multinode-704531                                                                     | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:50 UTC | 22 Apr 24 17:53 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-704531                                                                | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:53 UTC |                     |
	| node    | multinode-704531 node delete                                                            | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:53 UTC | 22 Apr 24 17:53 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-704531 stop                                                                   | multinode-704531 | jenkins | v1.33.0 | 22 Apr 24 17:53 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 17:50:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 17:50:19.843688   48612 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:50:19.844232   48612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:50:19.844252   48612 out.go:304] Setting ErrFile to fd 2...
	I0422 17:50:19.844258   48612 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:50:19.844731   48612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:50:19.845680   48612 out.go:298] Setting JSON to false
	I0422 17:50:19.846712   48612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5565,"bootTime":1713802655,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:50:19.846777   48612 start.go:139] virtualization: kvm guest
	I0422 17:50:19.848615   48612 out.go:177] * [multinode-704531] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:50:19.850361   48612 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:50:19.850363   48612 notify.go:220] Checking for updates...
	I0422 17:50:19.851846   48612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:50:19.853317   48612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:50:19.854556   48612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:50:19.855752   48612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:50:19.856827   48612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:50:19.858500   48612 config.go:182] Loaded profile config "multinode-704531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:50:19.858588   48612 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:50:19.859020   48612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:50:19.859060   48612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:50:19.873670   48612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46171
	I0422 17:50:19.874043   48612 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:50:19.874599   48612 main.go:141] libmachine: Using API Version  1
	I0422 17:50:19.874619   48612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:50:19.874974   48612 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:50:19.875170   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:50:19.910342   48612 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 17:50:19.911625   48612 start.go:297] selected driver: kvm2
	I0422 17:50:19.911640   48612 start.go:901] validating driver "kvm2" against &{Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:50:19.911764   48612 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:50:19.912057   48612 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:50:19.912118   48612 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 17:50:19.927667   48612 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 17:50:19.928319   48612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 17:50:19.928384   48612 cni.go:84] Creating CNI manager for ""
	I0422 17:50:19.928396   48612 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 17:50:19.928458   48612 start.go:340] cluster config:
	{Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-704531 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:50:19.928582   48612 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 17:50:19.930932   48612 out.go:177] * Starting "multinode-704531" primary control-plane node in "multinode-704531" cluster
	I0422 17:50:19.932058   48612 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:50:19.932095   48612 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 17:50:19.932112   48612 cache.go:56] Caching tarball of preloaded images
	I0422 17:50:19.932180   48612 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 17:50:19.932191   48612 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 17:50:19.932323   48612 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/config.json ...
	I0422 17:50:19.932506   48612 start.go:360] acquireMachinesLock for multinode-704531: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 17:50:19.932546   48612 start.go:364] duration metric: took 22.372µs to acquireMachinesLock for "multinode-704531"
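The lock acquired above uses a 500ms retry delay and a 13m timeout before giving up. A minimal, self-contained Go sketch of that acquire-with-retry pattern (illustrative only; this is not minikube's actual lock helper, and the lock-file path below is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file, mirroring the
	// Delay/Timeout semantics logged above (500ms retry, 13m deadline).
	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		// ... reuse or fix the existing machine while the lock is held ...
	}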
	I0422 17:50:19.932567   48612 start.go:96] Skipping create...Using existing machine configuration
	I0422 17:50:19.932574   48612 fix.go:54] fixHost starting: 
	I0422 17:50:19.932837   48612 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:50:19.932877   48612 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:50:19.946948   48612 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0422 17:50:19.947360   48612 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:50:19.947834   48612 main.go:141] libmachine: Using API Version  1
	I0422 17:50:19.947855   48612 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:50:19.948210   48612 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:50:19.948381   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:50:19.948521   48612 main.go:141] libmachine: (multinode-704531) Calling .GetState
	I0422 17:50:19.950064   48612 fix.go:112] recreateIfNeeded on multinode-704531: state=Running err=<nil>
	W0422 17:50:19.950094   48612 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 17:50:19.952158   48612 out.go:177] * Updating the running kvm2 "multinode-704531" VM ...
	I0422 17:50:19.953397   48612 machine.go:94] provisionDockerMachine start ...
	I0422 17:50:19.953413   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:50:19.953609   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:19.956211   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:19.956629   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:19.956649   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:19.956784   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:19.956954   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:19.957101   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:19.957206   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:19.957328   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:19.957513   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:19.957524   48612 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 17:50:20.067688   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-704531
	
	I0422 17:50:20.067715   48612 main.go:141] libmachine: (multinode-704531) Calling .GetMachineName
	I0422 17:50:20.067975   48612 buildroot.go:166] provisioning hostname "multinode-704531"
	I0422 17:50:20.067995   48612 main.go:141] libmachine: (multinode-704531) Calling .GetMachineName
	I0422 17:50:20.068243   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.071396   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.071781   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.071815   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.071966   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.072170   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.072351   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.072492   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.072671   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:20.072897   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:20.072915   48612 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-704531 && echo "multinode-704531" | sudo tee /etc/hostname
	I0422 17:50:20.188990   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-704531
	
	I0422 17:50:20.189019   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.191682   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.192007   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.192041   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.192164   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.192362   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.192527   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.192669   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.192845   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:20.193037   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:20.193061   48612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-704531' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-704531/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-704531' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 17:50:20.292284   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
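The hostname provisioning above runs plain shell commands over SSH against the VM. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh (illustrative; the address, key path and command come from the log, but this is not minikube's own SSH client code):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.39.41:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		// Same command the provisioner runs: set the hostname and persist it.
		out, err := session.CombinedOutput(`sudo hostname multinode-704531 && echo "multinode-704531" | sudo tee /etc/hostname`)
		fmt.Printf("output: %s err: %v\n", out, err)
	}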
	I0422 17:50:20.292311   48612 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 17:50:20.292332   48612 buildroot.go:174] setting up certificates
	I0422 17:50:20.292342   48612 provision.go:84] configureAuth start
	I0422 17:50:20.292353   48612 main.go:141] libmachine: (multinode-704531) Calling .GetMachineName
	I0422 17:50:20.292622   48612 main.go:141] libmachine: (multinode-704531) Calling .GetIP
	I0422 17:50:20.295550   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.295970   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.296000   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.296122   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.298381   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.298794   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.298831   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.299067   48612 provision.go:143] copyHostCerts
	I0422 17:50:20.299096   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:50:20.299141   48612 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 17:50:20.299153   48612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 17:50:20.299233   48612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 17:50:20.299364   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:50:20.299397   48612 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 17:50:20.299407   48612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 17:50:20.299448   48612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 17:50:20.299528   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:50:20.299551   48612 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 17:50:20.299561   48612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 17:50:20.299595   48612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 17:50:20.299665   48612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.multinode-704531 san=[127.0.0.1 192.168.39.41 localhost minikube multinode-704531]
	I0422 17:50:20.521271   48612 provision.go:177] copyRemoteCerts
	I0422 17:50:20.521348   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 17:50:20.521386   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.524527   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.524910   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.524954   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.525158   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.525342   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.525525   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.525657   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:50:20.615922   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0422 17:50:20.615993   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 17:50:20.643917   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0422 17:50:20.643987   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 17:50:20.677133   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0422 17:50:20.677223   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0422 17:50:20.704850   48612 provision.go:87] duration metric: took 412.494646ms to configureAuth
	I0422 17:50:20.704875   48612 buildroot.go:189] setting minikube options for container-runtime
	I0422 17:50:20.705165   48612 config.go:182] Loaded profile config "multinode-704531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:50:20.705283   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:50:20.707758   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.708147   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:50:20.708179   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:50:20.708289   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:50:20.708532   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.708715   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:50:20.708880   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:50:20.709092   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:50:20.709254   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:50:20.709317   48612 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 17:51:51.422644   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 17:51:51.422681   48612 machine.go:97] duration metric: took 1m31.469273052s to provisionDockerMachine
	I0422 17:51:51.422696   48612 start.go:293] postStartSetup for "multinode-704531" (driver="kvm2")
	I0422 17:51:51.422709   48612 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 17:51:51.422757   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.423077   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 17:51:51.423108   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.426246   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.426762   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.426782   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.426947   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.427138   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.427341   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.427505   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:51:51.511491   48612 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 17:51:51.515965   48612 command_runner.go:130] > NAME=Buildroot
	I0422 17:51:51.516000   48612 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0422 17:51:51.516006   48612 command_runner.go:130] > ID=buildroot
	I0422 17:51:51.516013   48612 command_runner.go:130] > VERSION_ID=2023.02.9
	I0422 17:51:51.516021   48612 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0422 17:51:51.516060   48612 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 17:51:51.516078   48612 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 17:51:51.516143   48612 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 17:51:51.516238   48612 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 17:51:51.516250   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /etc/ssl/certs/188842.pem
	I0422 17:51:51.516362   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 17:51:51.526355   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:51:51.553303   48612 start.go:296] duration metric: took 130.592949ms for postStartSetup
	I0422 17:51:51.553343   48612 fix.go:56] duration metric: took 1m31.620768585s for fixHost
	I0422 17:51:51.553361   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.556450   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.556821   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.556848   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.557040   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.557255   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.557412   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.557543   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.557735   48612 main.go:141] libmachine: Using SSH client type: native
	I0422 17:51:51.557918   48612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0422 17:51:51.557928   48612 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 17:51:51.656641   48612 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713808311.638286955
	
	I0422 17:51:51.656667   48612 fix.go:216] guest clock: 1713808311.638286955
	I0422 17:51:51.656677   48612 fix.go:229] Guest: 2024-04-22 17:51:51.638286955 +0000 UTC Remote: 2024-04-22 17:51:51.553346745 +0000 UTC m=+91.755843684 (delta=84.94021ms)
	I0422 17:51:51.656707   48612 fix.go:200] guest clock delta is within tolerance: 84.94021ms
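The clock check above compares the guest timestamp against the host-side timestamp and accepts a small skew (here 84.94ms). A minimal Go sketch of that comparison; the 2s tolerance below is an assumption for illustration, not a value taken from the log:

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether guest and host clocks agree closely
	// enough; the tolerance value passed in is illustrative, not minikube's.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the log lines above (guest clock vs. remote timestamp).
		guest := time.Unix(0, 1713808311638286955)
		host := time.Date(2024, 4, 22, 17, 51, 51, 553346745, time.UTC)
		delta, ok := withinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}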
	I0422 17:51:51.656712   48612 start.go:83] releasing machines lock for "multinode-704531", held for 1m31.724158207s
	I0422 17:51:51.656730   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.657020   48612 main.go:141] libmachine: (multinode-704531) Calling .GetIP
	I0422 17:51:51.659553   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.659941   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.659967   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.660134   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.660679   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.660869   48612 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:51:51.660963   48612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 17:51:51.661004   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.661103   48612 ssh_runner.go:195] Run: cat /version.json
	I0422 17:51:51.661132   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:51:51.663695   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664027   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.664050   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664069   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664222   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.664409   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.664565   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.664610   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:51:51.664648   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:51:51.664696   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:51:51.664786   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:51:51.664940   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:51:51.665081   48612 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:51:51.665208   48612 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:51:51.773219   48612 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0422 17:51:51.773296   48612 command_runner.go:130] > {"iso_version": "v1.33.0-1713736271-18706", "kicbase_version": "v0.0.43-1713569670-18702", "minikube_version": "v1.33.0", "commit": "b9323c427b57f243371c998c7e6c1a23da9819a4"}
	I0422 17:51:51.773450   48612 ssh_runner.go:195] Run: systemctl --version
	I0422 17:51:51.779579   48612 command_runner.go:130] > systemd 252 (252)
	I0422 17:51:51.779622   48612 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0422 17:51:51.779675   48612 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 17:51:51.944487   48612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 17:51:51.953445   48612 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0422 17:51:51.953504   48612 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 17:51:51.953566   48612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 17:51:51.964532   48612 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
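The find/mv step above renames any bridge or podman CNI configs out of the way (adding a .mk_disabled suffix) so only the expected CNI config stays active; in this run there was nothing to disable. A rough Go equivalent of that rename pass (illustrative; the directory and patterns are copied from the command in the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pattern)
			if err != nil {
				panic(err)
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Println("rename failed:", err)
				}
			}
		}
	}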
	I0422 17:51:51.964562   48612 start.go:494] detecting cgroup driver to use...
	I0422 17:51:51.964617   48612 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 17:51:51.983265   48612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 17:51:51.998394   48612 docker.go:217] disabling cri-docker service (if available) ...
	I0422 17:51:51.998481   48612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 17:51:52.012603   48612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 17:51:52.026650   48612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 17:51:52.174809   48612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 17:51:52.322351   48612 docker.go:233] disabling docker service ...
	I0422 17:51:52.322428   48612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 17:51:52.339314   48612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 17:51:52.353867   48612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 17:51:52.501156   48612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 17:51:52.643723   48612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 17:51:52.657849   48612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 17:51:52.677713   48612 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
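The step above writes a one-line /etc/crictl.yaml pointing crictl at the CRI-O socket. A minimal Go sketch that writes the same file and shells out to crictl to confirm the runtime is reachable (illustrative; it assumes it runs as root on the guest):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
		if err := os.MkdirAll("/etc", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0o644); err != nil {
			panic(err)
		}
		// Roughly equivalent to the later "crictl version" check in the log.
		out, err := exec.Command("crictl", "--runtime-endpoint", "unix:///var/run/crio/crio.sock", "version").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
	}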
	I0422 17:51:52.678207   48612 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 17:51:52.678273   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.689700   48612 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 17:51:52.689774   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.700614   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.711418   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.721969   48612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 17:51:52.732895   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.743446   48612 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.755068   48612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 17:51:52.765741   48612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 17:51:52.775624   48612 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0422 17:51:52.775710   48612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 17:51:52.785602   48612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:51:52.923233   48612 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 17:52:01.145001   48612 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.221728274s)
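The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is set to cgroupfs with conmon in the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A small Go sketch that writes a drop-in with the same settings instead of patching in place (an illustration of the end state, not minikube's code; the drop-in filename is an assumption):

	package main

	import "os"

	// The values below mirror the sed edits in the log; writing a separate
	// drop-in is just an alternative way to reach the same configuration.
	const crioDropIn = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
		if err := os.WriteFile("/etc/crio/crio.conf.d/99-example.conf", []byte(crioDropIn), 0o644); err != nil {
			panic(err)
		}
		// Then restart the runtime, e.g. `sudo systemctl restart crio`, as the log does.
	}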
	I0422 17:52:01.145035   48612 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 17:52:01.145100   48612 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 17:52:01.150281   48612 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0422 17:52:01.150310   48612 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0422 17:52:01.150320   48612 command_runner.go:130] > Device: 0,22	Inode: 1304        Links: 1
	I0422 17:52:01.150330   48612 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 17:52:01.150338   48612 command_runner.go:130] > Access: 2024-04-22 17:52:01.085298137 +0000
	I0422 17:52:01.150354   48612 command_runner.go:130] > Modify: 2024-04-22 17:52:01.016296209 +0000
	I0422 17:52:01.150362   48612 command_runner.go:130] > Change: 2024-04-22 17:52:01.016296209 +0000
	I0422 17:52:01.150368   48612 command_runner.go:130] >  Birth: -
	I0422 17:52:01.150443   48612 start.go:562] Will wait 60s for crictl version
	I0422 17:52:01.150509   48612 ssh_runner.go:195] Run: which crictl
	I0422 17:52:01.154457   48612 command_runner.go:130] > /usr/bin/crictl
	I0422 17:52:01.154519   48612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 17:52:01.193804   48612 command_runner.go:130] > Version:  0.1.0
	I0422 17:52:01.193833   48612 command_runner.go:130] > RuntimeName:  cri-o
	I0422 17:52:01.193852   48612 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0422 17:52:01.193861   48612 command_runner.go:130] > RuntimeApiVersion:  v1
	I0422 17:52:01.193881   48612 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 17:52:01.193959   48612 ssh_runner.go:195] Run: crio --version
	I0422 17:52:01.225943   48612 command_runner.go:130] > crio version 1.29.1
	I0422 17:52:01.225964   48612 command_runner.go:130] > Version:        1.29.1
	I0422 17:52:01.225970   48612 command_runner.go:130] > GitCommit:      unknown
	I0422 17:52:01.225974   48612 command_runner.go:130] > GitCommitDate:  unknown
	I0422 17:52:01.225978   48612 command_runner.go:130] > GitTreeState:   clean
	I0422 17:52:01.225984   48612 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0422 17:52:01.225988   48612 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 17:52:01.225992   48612 command_runner.go:130] > Compiler:       gc
	I0422 17:52:01.225996   48612 command_runner.go:130] > Platform:       linux/amd64
	I0422 17:52:01.226000   48612 command_runner.go:130] > Linkmode:       dynamic
	I0422 17:52:01.226005   48612 command_runner.go:130] > BuildTags:      
	I0422 17:52:01.226010   48612 command_runner.go:130] >   containers_image_ostree_stub
	I0422 17:52:01.226016   48612 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 17:52:01.226021   48612 command_runner.go:130] >   btrfs_noversion
	I0422 17:52:01.226027   48612 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 17:52:01.226032   48612 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 17:52:01.226037   48612 command_runner.go:130] >   seccomp
	I0422 17:52:01.226045   48612 command_runner.go:130] > LDFlags:          unknown
	I0422 17:52:01.226055   48612 command_runner.go:130] > SeccompEnabled:   true
	I0422 17:52:01.226061   48612 command_runner.go:130] > AppArmorEnabled:  false
	I0422 17:52:01.226168   48612 ssh_runner.go:195] Run: crio --version
	I0422 17:52:01.256066   48612 command_runner.go:130] > crio version 1.29.1
	I0422 17:52:01.256093   48612 command_runner.go:130] > Version:        1.29.1
	I0422 17:52:01.256102   48612 command_runner.go:130] > GitCommit:      unknown
	I0422 17:52:01.256108   48612 command_runner.go:130] > GitCommitDate:  unknown
	I0422 17:52:01.256114   48612 command_runner.go:130] > GitTreeState:   clean
	I0422 17:52:01.256122   48612 command_runner.go:130] > BuildDate:      2024-04-22T03:47:45Z
	I0422 17:52:01.256136   48612 command_runner.go:130] > GoVersion:      go1.21.6
	I0422 17:52:01.256140   48612 command_runner.go:130] > Compiler:       gc
	I0422 17:52:01.256145   48612 command_runner.go:130] > Platform:       linux/amd64
	I0422 17:52:01.256149   48612 command_runner.go:130] > Linkmode:       dynamic
	I0422 17:52:01.256154   48612 command_runner.go:130] > BuildTags:      
	I0422 17:52:01.256159   48612 command_runner.go:130] >   containers_image_ostree_stub
	I0422 17:52:01.256163   48612 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0422 17:52:01.256176   48612 command_runner.go:130] >   btrfs_noversion
	I0422 17:52:01.256183   48612 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0422 17:52:01.256190   48612 command_runner.go:130] >   libdm_no_deferred_remove
	I0422 17:52:01.256196   48612 command_runner.go:130] >   seccomp
	I0422 17:52:01.256211   48612 command_runner.go:130] > LDFlags:          unknown
	I0422 17:52:01.256218   48612 command_runner.go:130] > SeccompEnabled:   true
	I0422 17:52:01.256227   48612 command_runner.go:130] > AppArmorEnabled:  false
	I0422 17:52:01.258973   48612 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 17:52:01.260464   48612 main.go:141] libmachine: (multinode-704531) Calling .GetIP
	I0422 17:52:01.263224   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:52:01.263677   48612 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:52:01.263704   48612 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:52:01.263915   48612 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 17:52:01.268461   48612 command_runner.go:130] > 192.168.39.1	host.minikube.internal
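The grep above confirms that host.minikube.internal already resolves to the gateway (192.168.39.1) inside the guest's /etc/hosts. A tiny Go sketch of that check (illustrative only):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/hosts")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := scanner.Text()
			if strings.HasPrefix(line, "192.168.39.1") && strings.Contains(line, "host.minikube.internal") {
				fmt.Println("found:", line)
				return
			}
		}
		fmt.Println("host.minikube.internal entry missing; it would be appended here")
	}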
	I0422 17:52:01.268598   48612 kubeadm.go:877] updating cluster {Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 17:52:01.268759   48612 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 17:52:01.268817   48612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:52:01.314617   48612 command_runner.go:130] > {
	I0422 17:52:01.314637   48612 command_runner.go:130] >   "images": [
	I0422 17:52:01.314641   48612 command_runner.go:130] >     {
	I0422 17:52:01.314659   48612 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 17:52:01.314664   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.314673   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 17:52:01.314682   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314689   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.314703   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 17:52:01.314714   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 17:52:01.314719   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314726   48612 command_runner.go:130] >       "size": "65291810",
	I0422 17:52:01.314732   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.314741   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.314748   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.314757   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.314761   48612 command_runner.go:130] >     },
	I0422 17:52:01.314765   48612 command_runner.go:130] >     {
	I0422 17:52:01.314771   48612 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 17:52:01.314775   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.314780   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 17:52:01.314794   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314801   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.314809   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 17:52:01.314824   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 17:52:01.314834   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314841   48612 command_runner.go:130] >       "size": "1363676",
	I0422 17:52:01.314851   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.314862   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.314867   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.314874   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.314877   48612 command_runner.go:130] >     },
	I0422 17:52:01.314882   48612 command_runner.go:130] >     {
	I0422 17:52:01.314888   48612 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 17:52:01.314894   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.314899   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 17:52:01.314905   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314909   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.314923   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 17:52:01.314938   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 17:52:01.314948   48612 command_runner.go:130] >       ],
	I0422 17:52:01.314958   48612 command_runner.go:130] >       "size": "31470524",
	I0422 17:52:01.314967   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.314977   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.314985   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.314993   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.314997   48612 command_runner.go:130] >     },
	I0422 17:52:01.315003   48612 command_runner.go:130] >     {
	I0422 17:52:01.315009   48612 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 17:52:01.315015   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315021   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 17:52:01.315029   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315039   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315055   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 17:52:01.315080   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 17:52:01.315090   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315097   48612 command_runner.go:130] >       "size": "61245718",
	I0422 17:52:01.315111   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.315118   48612 command_runner.go:130] >       "username": "nonroot",
	I0422 17:52:01.315138   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315149   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315154   48612 command_runner.go:130] >     },
	I0422 17:52:01.315161   48612 command_runner.go:130] >     {
	I0422 17:52:01.315175   48612 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 17:52:01.315191   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315202   48612 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 17:52:01.315210   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315220   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315228   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 17:52:01.315242   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 17:52:01.315251   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315258   48612 command_runner.go:130] >       "size": "150779692",
	I0422 17:52:01.315267   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315275   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315283   48612 command_runner.go:130] >       },
	I0422 17:52:01.315290   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315300   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315310   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315316   48612 command_runner.go:130] >     },
	I0422 17:52:01.315324   48612 command_runner.go:130] >     {
	I0422 17:52:01.315332   48612 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 17:52:01.315342   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315351   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 17:52:01.315360   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315367   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315382   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 17:52:01.315402   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 17:52:01.315411   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315421   48612 command_runner.go:130] >       "size": "117609952",
	I0422 17:52:01.315430   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315436   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315440   48612 command_runner.go:130] >       },
	I0422 17:52:01.315449   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315466   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315481   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315490   48612 command_runner.go:130] >     },
	I0422 17:52:01.315498   48612 command_runner.go:130] >     {
	I0422 17:52:01.315511   48612 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 17:52:01.315520   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315527   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 17:52:01.315534   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315544   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315560   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 17:52:01.315577   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 17:52:01.315586   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315595   48612 command_runner.go:130] >       "size": "112170310",
	I0422 17:52:01.315604   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315613   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315621   48612 command_runner.go:130] >       },
	I0422 17:52:01.315628   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315634   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315643   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315652   48612 command_runner.go:130] >     },
	I0422 17:52:01.315658   48612 command_runner.go:130] >     {
	I0422 17:52:01.315671   48612 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 17:52:01.315680   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315692   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 17:52:01.315701   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315711   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315740   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 17:52:01.315762   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 17:52:01.315768   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315779   48612 command_runner.go:130] >       "size": "85932953",
	I0422 17:52:01.315788   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.315798   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315805   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315811   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315817   48612 command_runner.go:130] >     },
	I0422 17:52:01.315822   48612 command_runner.go:130] >     {
	I0422 17:52:01.315837   48612 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 17:52:01.315841   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315848   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 17:52:01.315853   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315860   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.315875   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 17:52:01.315887   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 17:52:01.315893   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315900   48612 command_runner.go:130] >       "size": "63026502",
	I0422 17:52:01.315905   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.315912   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.315917   48612 command_runner.go:130] >       },
	I0422 17:52:01.315922   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.315925   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.315930   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.315935   48612 command_runner.go:130] >     },
	I0422 17:52:01.315939   48612 command_runner.go:130] >     {
	I0422 17:52:01.315949   48612 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 17:52:01.315966   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.315976   48612 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 17:52:01.315981   48612 command_runner.go:130] >       ],
	I0422 17:52:01.315990   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.316002   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 17:52:01.316012   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 17:52:01.316016   48612 command_runner.go:130] >       ],
	I0422 17:52:01.316023   48612 command_runner.go:130] >       "size": "750414",
	I0422 17:52:01.316031   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.316038   48612 command_runner.go:130] >         "value": "65535"
	I0422 17:52:01.316047   48612 command_runner.go:130] >       },
	I0422 17:52:01.316054   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.316063   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.316072   48612 command_runner.go:130] >       "pinned": true
	I0422 17:52:01.316080   48612 command_runner.go:130] >     }
	I0422 17:52:01.316086   48612 command_runner.go:130] >   ]
	I0422 17:52:01.316094   48612 command_runner.go:130] > }
	I0422 17:52:01.316412   48612 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:52:01.316427   48612 crio.go:433] Images already preloaded, skipping extraction
	I0422 17:52:01.316483   48612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 17:52:01.350905   48612 command_runner.go:130] > {
	I0422 17:52:01.350927   48612 command_runner.go:130] >   "images": [
	I0422 17:52:01.350931   48612 command_runner.go:130] >     {
	I0422 17:52:01.350938   48612 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0422 17:52:01.350943   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.350948   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0422 17:52:01.350952   48612 command_runner.go:130] >       ],
	I0422 17:52:01.350956   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.350964   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0422 17:52:01.350970   48612 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0422 17:52:01.350974   48612 command_runner.go:130] >       ],
	I0422 17:52:01.350979   48612 command_runner.go:130] >       "size": "65291810",
	I0422 17:52:01.350983   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.350994   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351009   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351018   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351024   48612 command_runner.go:130] >     },
	I0422 17:52:01.351029   48612 command_runner.go:130] >     {
	I0422 17:52:01.351039   48612 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0422 17:52:01.351056   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351064   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0422 17:52:01.351068   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351073   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351080   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0422 17:52:01.351094   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0422 17:52:01.351100   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351104   48612 command_runner.go:130] >       "size": "1363676",
	I0422 17:52:01.351108   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.351116   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351141   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351151   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351156   48612 command_runner.go:130] >     },
	I0422 17:52:01.351162   48612 command_runner.go:130] >     {
	I0422 17:52:01.351174   48612 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0422 17:52:01.351184   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351191   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0422 17:52:01.351198   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351202   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351209   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0422 17:52:01.351219   48612 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0422 17:52:01.351225   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351229   48612 command_runner.go:130] >       "size": "31470524",
	I0422 17:52:01.351233   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.351241   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351247   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351257   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351263   48612 command_runner.go:130] >     },
	I0422 17:52:01.351268   48612 command_runner.go:130] >     {
	I0422 17:52:01.351282   48612 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0422 17:52:01.351292   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351303   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0422 17:52:01.351307   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351311   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351321   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0422 17:52:01.351339   48612 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0422 17:52:01.351348   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351355   48612 command_runner.go:130] >       "size": "61245718",
	I0422 17:52:01.351365   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.351372   48612 command_runner.go:130] >       "username": "nonroot",
	I0422 17:52:01.351385   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351402   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351410   48612 command_runner.go:130] >     },
	I0422 17:52:01.351415   48612 command_runner.go:130] >     {
	I0422 17:52:01.351427   48612 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0422 17:52:01.351434   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351438   48612 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0422 17:52:01.351445   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351451   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351472   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0422 17:52:01.351490   48612 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0422 17:52:01.351495   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351503   48612 command_runner.go:130] >       "size": "150779692",
	I0422 17:52:01.351513   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.351523   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.351531   48612 command_runner.go:130] >       },
	I0422 17:52:01.351541   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351551   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351561   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351568   48612 command_runner.go:130] >     },
	I0422 17:52:01.351575   48612 command_runner.go:130] >     {
	I0422 17:52:01.351585   48612 command_runner.go:130] >       "id": "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0",
	I0422 17:52:01.351595   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351607   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.0"
	I0422 17:52:01.351616   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351625   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351641   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81",
	I0422 17:52:01.351655   48612 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"
	I0422 17:52:01.351664   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351671   48612 command_runner.go:130] >       "size": "117609952",
	I0422 17:52:01.351675   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.351685   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.351694   48612 command_runner.go:130] >       },
	I0422 17:52:01.351702   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351712   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351721   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351727   48612 command_runner.go:130] >     },
	I0422 17:52:01.351744   48612 command_runner.go:130] >     {
	I0422 17:52:01.351757   48612 command_runner.go:130] >       "id": "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b",
	I0422 17:52:01.351765   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351771   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.0"
	I0422 17:52:01.351779   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351785   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351800   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe",
	I0422 17:52:01.351817   48612 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"
	I0422 17:52:01.351828   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351839   48612 command_runner.go:130] >       "size": "112170310",
	I0422 17:52:01.351846   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.351854   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.351862   48612 command_runner.go:130] >       },
	I0422 17:52:01.351866   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.351873   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.351880   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.351889   48612 command_runner.go:130] >     },
	I0422 17:52:01.351894   48612 command_runner.go:130] >     {
	I0422 17:52:01.351905   48612 command_runner.go:130] >       "id": "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b",
	I0422 17:52:01.351914   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.351922   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.0"
	I0422 17:52:01.351931   48612 command_runner.go:130] >       ],
	I0422 17:52:01.351938   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.351965   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68",
	I0422 17:52:01.351980   48612 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"
	I0422 17:52:01.351989   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352001   48612 command_runner.go:130] >       "size": "85932953",
	I0422 17:52:01.352011   48612 command_runner.go:130] >       "uid": null,
	I0422 17:52:01.352020   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.352030   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.352039   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.352047   48612 command_runner.go:130] >     },
	I0422 17:52:01.352056   48612 command_runner.go:130] >     {
	I0422 17:52:01.352064   48612 command_runner.go:130] >       "id": "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced",
	I0422 17:52:01.352073   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.352081   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.0"
	I0422 17:52:01.352097   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352110   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.352120   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67",
	I0422 17:52:01.352131   48612 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"
	I0422 17:52:01.352137   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352144   48612 command_runner.go:130] >       "size": "63026502",
	I0422 17:52:01.352150   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.352158   48612 command_runner.go:130] >         "value": "0"
	I0422 17:52:01.352164   48612 command_runner.go:130] >       },
	I0422 17:52:01.352175   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.352181   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.352189   48612 command_runner.go:130] >       "pinned": false
	I0422 17:52:01.352195   48612 command_runner.go:130] >     },
	I0422 17:52:01.352204   48612 command_runner.go:130] >     {
	I0422 17:52:01.352215   48612 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0422 17:52:01.352224   48612 command_runner.go:130] >       "repoTags": [
	I0422 17:52:01.352231   48612 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0422 17:52:01.352240   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352247   48612 command_runner.go:130] >       "repoDigests": [
	I0422 17:52:01.352261   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0422 17:52:01.352278   48612 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0422 17:52:01.352287   48612 command_runner.go:130] >       ],
	I0422 17:52:01.352294   48612 command_runner.go:130] >       "size": "750414",
	I0422 17:52:01.352303   48612 command_runner.go:130] >       "uid": {
	I0422 17:52:01.352310   48612 command_runner.go:130] >         "value": "65535"
	I0422 17:52:01.352319   48612 command_runner.go:130] >       },
	I0422 17:52:01.352326   48612 command_runner.go:130] >       "username": "",
	I0422 17:52:01.352336   48612 command_runner.go:130] >       "spec": null,
	I0422 17:52:01.352342   48612 command_runner.go:130] >       "pinned": true
	I0422 17:52:01.352351   48612 command_runner.go:130] >     }
	I0422 17:52:01.352356   48612 command_runner.go:130] >   ]
	I0422 17:52:01.352364   48612 command_runner.go:130] > }
	I0422 17:52:01.352516   48612 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 17:52:01.352529   48612 cache_images.go:84] Images are preloaded, skipping loading
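The two JSON documents above are the raw output of "sudo crictl images --output json", which minikube reads before concluding that the preloaded images are already present. As a minimal, hypothetical Go sketch (not minikube's actual crio.go code), the same check can be reproduced by running the command and decoding the fields that appear in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the fields of interest from "crictl images --output json".
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Pinned      bool     `json:"pinned"`
}

// crictlImageList matches the top-level {"images": [...]} object shown in the log above.
type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Run the same command the log shows; sudo is needed when crictl talks to the root-owned CRI socket.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}

	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	// Print every repo tag so it can be compared against an expected preload list.
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}

Against the image list in the log, this would print entries such as registry.k8s.io/kube-apiserver:v1.30.0 and registry.k8s.io/pause:3.9, which is what supports the "all images are preloaded" conclusion above.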
	I0422 17:52:01.352536   48612 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.30.0 crio true true} ...
	I0422 17:52:01.352679   48612 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-704531 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
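The kubelet drop-in printed above is generated from the node's Kubernetes version, hostname, and IP. A minimal sketch of how such a drop-in could be rendered with Go's text/template follows; the template variable names are illustrative (they are not minikube's actual template), and the values are copied from the log for the sake of the example:

package main

import (
	"os"
	"text/template"
)

// kubeletTmpl reproduces the shape of the systemd drop-in shown in the log above.
const kubeletTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))

	// Values taken from the log above, purely for illustration.
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{
		KubernetesVersion: "v1.30.0",
		NodeName:          "multinode-704531",
		NodeIP:            "192.168.39.41",
	}

	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

Writing the rendered text into a kubelet systemd drop-in directory and reloading systemd is the usual way such an ExecStart override takes effect; the exact file path minikube writes is not shown in this excerpt.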
	I0422 17:52:01.352766   48612 ssh_runner.go:195] Run: crio config
	I0422 17:52:01.386076   48612 command_runner.go:130] ! time="2024-04-22 17:52:01.367649942Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0422 17:52:01.391722   48612 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0422 17:52:01.398066   48612 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0422 17:52:01.398099   48612 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0422 17:52:01.398106   48612 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0422 17:52:01.398110   48612 command_runner.go:130] > #
	I0422 17:52:01.398118   48612 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0422 17:52:01.398128   48612 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0422 17:52:01.398138   48612 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0422 17:52:01.398154   48612 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0422 17:52:01.398163   48612 command_runner.go:130] > # reload'.
	I0422 17:52:01.398170   48612 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0422 17:52:01.398179   48612 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0422 17:52:01.398186   48612 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0422 17:52:01.398193   48612 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0422 17:52:01.398199   48612 command_runner.go:130] > [crio]
	I0422 17:52:01.398209   48612 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0422 17:52:01.398216   48612 command_runner.go:130] > # containers images, in this directory.
	I0422 17:52:01.398227   48612 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0422 17:52:01.398244   48612 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0422 17:52:01.398254   48612 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0422 17:52:01.398270   48612 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0422 17:52:01.398275   48612 command_runner.go:130] > # imagestore = ""
	I0422 17:52:01.398283   48612 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0422 17:52:01.398289   48612 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0422 17:52:01.398297   48612 command_runner.go:130] > storage_driver = "overlay"
	I0422 17:52:01.398307   48612 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0422 17:52:01.398320   48612 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0422 17:52:01.398329   48612 command_runner.go:130] > storage_option = [
	I0422 17:52:01.398337   48612 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0422 17:52:01.398345   48612 command_runner.go:130] > ]
	I0422 17:52:01.398355   48612 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0422 17:52:01.398368   48612 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0422 17:52:01.398375   48612 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0422 17:52:01.398383   48612 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0422 17:52:01.398404   48612 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0422 17:52:01.398415   48612 command_runner.go:130] > # always happen on a node reboot
	I0422 17:52:01.398426   48612 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0422 17:52:01.398444   48612 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0422 17:52:01.398457   48612 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0422 17:52:01.398466   48612 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0422 17:52:01.398473   48612 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0422 17:52:01.398489   48612 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0422 17:52:01.398505   48612 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0422 17:52:01.398515   48612 command_runner.go:130] > # internal_wipe = true
	I0422 17:52:01.398531   48612 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0422 17:52:01.398543   48612 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0422 17:52:01.398550   48612 command_runner.go:130] > # internal_repair = false
	I0422 17:52:01.398556   48612 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0422 17:52:01.398569   48612 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0422 17:52:01.398581   48612 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0422 17:52:01.398593   48612 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0422 17:52:01.398608   48612 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0422 17:52:01.398616   48612 command_runner.go:130] > [crio.api]
	I0422 17:52:01.398628   48612 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0422 17:52:01.398636   48612 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0422 17:52:01.398645   48612 command_runner.go:130] > # IP address on which the stream server will listen.
	I0422 17:52:01.398656   48612 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0422 17:52:01.398670   48612 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0422 17:52:01.398681   48612 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0422 17:52:01.398690   48612 command_runner.go:130] > # stream_port = "0"
	I0422 17:52:01.398702   48612 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0422 17:52:01.398712   48612 command_runner.go:130] > # stream_enable_tls = false
	I0422 17:52:01.398722   48612 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0422 17:52:01.398730   48612 command_runner.go:130] > # stream_idle_timeout = ""
	I0422 17:52:01.398744   48612 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0422 17:52:01.398757   48612 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0422 17:52:01.398765   48612 command_runner.go:130] > # minutes.
	I0422 17:52:01.398774   48612 command_runner.go:130] > # stream_tls_cert = ""
	I0422 17:52:01.398786   48612 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0422 17:52:01.398798   48612 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0422 17:52:01.398811   48612 command_runner.go:130] > # stream_tls_key = ""
	I0422 17:52:01.398824   48612 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0422 17:52:01.398838   48612 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0422 17:52:01.398869   48612 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0422 17:52:01.398879   48612 command_runner.go:130] > # stream_tls_ca = ""
	I0422 17:52:01.398892   48612 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 17:52:01.398899   48612 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0422 17:52:01.398910   48612 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0422 17:52:01.398923   48612 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0422 17:52:01.398937   48612 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0422 17:52:01.398949   48612 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0422 17:52:01.398958   48612 command_runner.go:130] > [crio.runtime]
	I0422 17:52:01.398975   48612 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0422 17:52:01.398983   48612 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0422 17:52:01.398988   48612 command_runner.go:130] > # "nofile=1024:2048"
	I0422 17:52:01.399002   48612 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0422 17:52:01.399012   48612 command_runner.go:130] > # default_ulimits = [
	I0422 17:52:01.399020   48612 command_runner.go:130] > # ]
	I0422 17:52:01.399031   48612 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0422 17:52:01.399041   48612 command_runner.go:130] > # no_pivot = false
	I0422 17:52:01.399056   48612 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0422 17:52:01.399065   48612 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0422 17:52:01.399075   48612 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0422 17:52:01.399086   48612 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0422 17:52:01.399098   48612 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0422 17:52:01.399112   48612 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 17:52:01.399132   48612 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0422 17:52:01.399144   48612 command_runner.go:130] > # Cgroup setting for conmon
	I0422 17:52:01.399155   48612 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0422 17:52:01.399163   48612 command_runner.go:130] > conmon_cgroup = "pod"
	I0422 17:52:01.399177   48612 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0422 17:52:01.399188   48612 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0422 17:52:01.399197   48612 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0422 17:52:01.399206   48612 command_runner.go:130] > conmon_env = [
	I0422 17:52:01.399219   48612 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 17:52:01.399228   48612 command_runner.go:130] > ]
	I0422 17:52:01.399246   48612 command_runner.go:130] > # Additional environment variables to set for all the
	I0422 17:52:01.399257   48612 command_runner.go:130] > # containers. These are overridden if set in the
	I0422 17:52:01.399265   48612 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0422 17:52:01.399274   48612 command_runner.go:130] > # default_env = [
	I0422 17:52:01.399279   48612 command_runner.go:130] > # ]
	I0422 17:52:01.399288   48612 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0422 17:52:01.399307   48612 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0422 17:52:01.399316   48612 command_runner.go:130] > # selinux = false
	I0422 17:52:01.399326   48612 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0422 17:52:01.399340   48612 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0422 17:52:01.399353   48612 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0422 17:52:01.399362   48612 command_runner.go:130] > # seccomp_profile = ""
	I0422 17:52:01.399374   48612 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0422 17:52:01.399385   48612 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0422 17:52:01.399398   48612 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0422 17:52:01.399409   48612 command_runner.go:130] > # which might increase security.
	I0422 17:52:01.399415   48612 command_runner.go:130] > # This option is currently deprecated,
	I0422 17:52:01.399428   48612 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0422 17:52:01.399438   48612 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0422 17:52:01.399451   48612 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0422 17:52:01.399461   48612 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0422 17:52:01.399474   48612 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0422 17:52:01.399488   48612 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0422 17:52:01.399500   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.399510   48612 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0422 17:52:01.399521   48612 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0422 17:52:01.399531   48612 command_runner.go:130] > # the cgroup blockio controller.
	I0422 17:52:01.399540   48612 command_runner.go:130] > # blockio_config_file = ""
	I0422 17:52:01.399551   48612 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0422 17:52:01.399560   48612 command_runner.go:130] > # blockio parameters.
	I0422 17:52:01.399570   48612 command_runner.go:130] > # blockio_reload = false
	I0422 17:52:01.399584   48612 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0422 17:52:01.399594   48612 command_runner.go:130] > # irqbalance daemon.
	I0422 17:52:01.399605   48612 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0422 17:52:01.399617   48612 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0422 17:52:01.399629   48612 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0422 17:52:01.399645   48612 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0422 17:52:01.399658   48612 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0422 17:52:01.399672   48612 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0422 17:52:01.399683   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.399692   48612 command_runner.go:130] > # rdt_config_file = ""
	I0422 17:52:01.399703   48612 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0422 17:52:01.399713   48612 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0422 17:52:01.399755   48612 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0422 17:52:01.399767   48612 command_runner.go:130] > # separate_pull_cgroup = ""
	I0422 17:52:01.399778   48612 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0422 17:52:01.399788   48612 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0422 17:52:01.399799   48612 command_runner.go:130] > # will be added.
	I0422 17:52:01.399807   48612 command_runner.go:130] > # default_capabilities = [
	I0422 17:52:01.399811   48612 command_runner.go:130] > # 	"CHOWN",
	I0422 17:52:01.399817   48612 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0422 17:52:01.399827   48612 command_runner.go:130] > # 	"FSETID",
	I0422 17:52:01.399836   48612 command_runner.go:130] > # 	"FOWNER",
	I0422 17:52:01.399845   48612 command_runner.go:130] > # 	"SETGID",
	I0422 17:52:01.399851   48612 command_runner.go:130] > # 	"SETUID",
	I0422 17:52:01.399855   48612 command_runner.go:130] > # 	"SETPCAP",
	I0422 17:52:01.399865   48612 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0422 17:52:01.399873   48612 command_runner.go:130] > # 	"KILL",
	I0422 17:52:01.399882   48612 command_runner.go:130] > # ]
	I0422 17:52:01.399891   48612 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0422 17:52:01.399901   48612 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0422 17:52:01.399916   48612 command_runner.go:130] > # add_inheritable_capabilities = false
	I0422 17:52:01.399929   48612 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0422 17:52:01.399942   48612 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 17:52:01.399951   48612 command_runner.go:130] > default_sysctls = [
	I0422 17:52:01.399962   48612 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0422 17:52:01.399974   48612 command_runner.go:130] > ]
	I0422 17:52:01.399981   48612 command_runner.go:130] > # List of devices on the host that a
	I0422 17:52:01.399991   48612 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0422 17:52:01.400001   48612 command_runner.go:130] > # allowed_devices = [
	I0422 17:52:01.400010   48612 command_runner.go:130] > # 	"/dev/fuse",
	I0422 17:52:01.400018   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400035   48612 command_runner.go:130] > # List of additional devices. specified as
	I0422 17:52:01.400050   48612 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0422 17:52:01.400060   48612 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0422 17:52:01.400068   48612 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0422 17:52:01.400077   48612 command_runner.go:130] > # additional_devices = [
	I0422 17:52:01.400087   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400099   48612 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0422 17:52:01.400106   48612 command_runner.go:130] > # cdi_spec_dirs = [
	I0422 17:52:01.400114   48612 command_runner.go:130] > # 	"/etc/cdi",
	I0422 17:52:01.400123   48612 command_runner.go:130] > # 	"/var/run/cdi",
	I0422 17:52:01.400131   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400141   48612 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0422 17:52:01.400151   48612 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0422 17:52:01.400158   48612 command_runner.go:130] > # Defaults to false.
	I0422 17:52:01.400170   48612 command_runner.go:130] > # device_ownership_from_security_context = false
	I0422 17:52:01.400183   48612 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0422 17:52:01.400196   48612 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0422 17:52:01.400205   48612 command_runner.go:130] > # hooks_dir = [
	I0422 17:52:01.400215   48612 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0422 17:52:01.400223   48612 command_runner.go:130] > # ]
	I0422 17:52:01.400232   48612 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0422 17:52:01.400241   48612 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0422 17:52:01.400248   48612 command_runner.go:130] > # its default mounts from the following two files:
	I0422 17:52:01.400257   48612 command_runner.go:130] > #
	I0422 17:52:01.400268   48612 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0422 17:52:01.400281   48612 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0422 17:52:01.400290   48612 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0422 17:52:01.400298   48612 command_runner.go:130] > #
	I0422 17:52:01.400308   48612 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0422 17:52:01.400320   48612 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0422 17:52:01.400330   48612 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0422 17:52:01.400341   48612 command_runner.go:130] > #      only add mounts it finds in this file.
	I0422 17:52:01.400349   48612 command_runner.go:130] > #
	I0422 17:52:01.400356   48612 command_runner.go:130] > # default_mounts_file = ""
	I0422 17:52:01.400367   48612 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0422 17:52:01.400377   48612 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0422 17:52:01.400482   48612 command_runner.go:130] > pids_limit = 1024
	I0422 17:52:01.400506   48612 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0422 17:52:01.400516   48612 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0422 17:52:01.400527   48612 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0422 17:52:01.400543   48612 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0422 17:52:01.400554   48612 command_runner.go:130] > # log_size_max = -1
	I0422 17:52:01.400571   48612 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0422 17:52:01.400581   48612 command_runner.go:130] > # log_to_journald = false
	I0422 17:52:01.400599   48612 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0422 17:52:01.400608   48612 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0422 17:52:01.400617   48612 command_runner.go:130] > # Path to directory for container attach sockets.
	I0422 17:52:01.400628   48612 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0422 17:52:01.400640   48612 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0422 17:52:01.400650   48612 command_runner.go:130] > # bind_mount_prefix = ""
	I0422 17:52:01.400662   48612 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0422 17:52:01.400671   48612 command_runner.go:130] > # read_only = false
	I0422 17:52:01.400686   48612 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0422 17:52:01.400760   48612 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0422 17:52:01.400783   48612 command_runner.go:130] > # live configuration reload.
	I0422 17:52:01.400793   48612 command_runner.go:130] > # log_level = "info"
	I0422 17:52:01.400805   48612 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0422 17:52:01.400816   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.400825   48612 command_runner.go:130] > # log_filter = ""
	I0422 17:52:01.400838   48612 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0422 17:52:01.400854   48612 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0422 17:52:01.400862   48612 command_runner.go:130] > # separated by comma.
	I0422 17:52:01.400883   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.400892   48612 command_runner.go:130] > # uid_mappings = ""
	I0422 17:52:01.400898   48612 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0422 17:52:01.400912   48612 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0422 17:52:01.400926   48612 command_runner.go:130] > # separated by comma.
	I0422 17:52:01.400941   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.400958   48612 command_runner.go:130] > # gid_mappings = ""
	I0422 17:52:01.400971   48612 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0422 17:52:01.400981   48612 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 17:52:01.400989   48612 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 17:52:01.401014   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.401025   48612 command_runner.go:130] > # minimum_mappable_uid = -1
	I0422 17:52:01.401038   48612 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0422 17:52:01.401051   48612 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0422 17:52:01.401062   48612 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0422 17:52:01.401074   48612 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0422 17:52:01.401084   48612 command_runner.go:130] > # minimum_mappable_gid = -1
	I0422 17:52:01.401097   48612 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0422 17:52:01.401110   48612 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0422 17:52:01.401122   48612 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0422 17:52:01.401132   48612 command_runner.go:130] > # ctr_stop_timeout = 30
	I0422 17:52:01.401144   48612 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0422 17:52:01.401153   48612 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0422 17:52:01.401160   48612 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0422 17:52:01.401172   48612 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0422 17:52:01.401181   48612 command_runner.go:130] > drop_infra_ctr = false
	I0422 17:52:01.401194   48612 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0422 17:52:01.401207   48612 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0422 17:52:01.401221   48612 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0422 17:52:01.401230   48612 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0422 17:52:01.401237   48612 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0422 17:52:01.401249   48612 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0422 17:52:01.401259   48612 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0422 17:52:01.401270   48612 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0422 17:52:01.401280   48612 command_runner.go:130] > # shared_cpuset = ""
	I0422 17:52:01.401290   48612 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0422 17:52:01.401301   48612 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0422 17:52:01.401308   48612 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0422 17:52:01.401320   48612 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0422 17:52:01.401327   48612 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0422 17:52:01.401336   48612 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0422 17:52:01.401353   48612 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0422 17:52:01.401364   48612 command_runner.go:130] > # enable_criu_support = false
	I0422 17:52:01.401375   48612 command_runner.go:130] > # Enable/disable the generation of the container,
	I0422 17:52:01.401387   48612 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0422 17:52:01.401397   48612 command_runner.go:130] > # enable_pod_events = false
	I0422 17:52:01.401412   48612 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0422 17:52:01.401438   48612 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0422 17:52:01.401448   48612 command_runner.go:130] > # default_runtime = "runc"
	I0422 17:52:01.401459   48612 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0422 17:52:01.401471   48612 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0422 17:52:01.401488   48612 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0422 17:52:01.401496   48612 command_runner.go:130] > # creation as a file is not desired either.
	I0422 17:52:01.401508   48612 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0422 17:52:01.401519   48612 command_runner.go:130] > # the hostname is being managed dynamically.
	I0422 17:52:01.401530   48612 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0422 17:52:01.401539   48612 command_runner.go:130] > # ]
	I0422 17:52:01.401551   48612 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0422 17:52:01.401564   48612 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0422 17:52:01.401575   48612 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0422 17:52:01.401583   48612 command_runner.go:130] > # Each entry in the table should follow the format:
	I0422 17:52:01.401589   48612 command_runner.go:130] > #
	I0422 17:52:01.401600   48612 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0422 17:52:01.401616   48612 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0422 17:52:01.401677   48612 command_runner.go:130] > # runtime_type = "oci"
	I0422 17:52:01.401688   48612 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0422 17:52:01.401701   48612 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0422 17:52:01.401710   48612 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0422 17:52:01.401720   48612 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0422 17:52:01.401729   48612 command_runner.go:130] > # monitor_env = []
	I0422 17:52:01.401740   48612 command_runner.go:130] > # privileged_without_host_devices = false
	I0422 17:52:01.401749   48612 command_runner.go:130] > # allowed_annotations = []
	I0422 17:52:01.401757   48612 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0422 17:52:01.401765   48612 command_runner.go:130] > # Where:
	I0422 17:52:01.401777   48612 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0422 17:52:01.401791   48612 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0422 17:52:01.401804   48612 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0422 17:52:01.401817   48612 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0422 17:52:01.401828   48612 command_runner.go:130] > #   in $PATH.
	I0422 17:52:01.401838   48612 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0422 17:52:01.401847   48612 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0422 17:52:01.401864   48612 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0422 17:52:01.401874   48612 command_runner.go:130] > #   state.
	I0422 17:52:01.401887   48612 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0422 17:52:01.401898   48612 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0422 17:52:01.401911   48612 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0422 17:52:01.401924   48612 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0422 17:52:01.401935   48612 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0422 17:52:01.401948   48612 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0422 17:52:01.401959   48612 command_runner.go:130] > #   The currently recognized values are:
	I0422 17:52:01.401973   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0422 17:52:01.401987   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0422 17:52:01.401998   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0422 17:52:01.402008   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0422 17:52:01.402020   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0422 17:52:01.402034   48612 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0422 17:52:01.402048   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0422 17:52:01.402073   48612 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0422 17:52:01.402085   48612 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0422 17:52:01.402096   48612 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0422 17:52:01.402103   48612 command_runner.go:130] > #   deprecated option "conmon".
	I0422 17:52:01.402118   48612 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0422 17:52:01.402129   48612 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0422 17:52:01.402144   48612 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0422 17:52:01.402155   48612 command_runner.go:130] > #   should be moved to the container's cgroup
	I0422 17:52:01.402168   48612 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0422 17:52:01.402178   48612 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0422 17:52:01.402187   48612 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0422 17:52:01.402198   48612 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0422 17:52:01.402207   48612 command_runner.go:130] > #
	I0422 17:52:01.402217   48612 command_runner.go:130] > # Using the seccomp notifier feature:
	I0422 17:52:01.402226   48612 command_runner.go:130] > #
	I0422 17:52:01.402238   48612 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0422 17:52:01.402249   48612 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0422 17:52:01.402257   48612 command_runner.go:130] > #
	I0422 17:52:01.402265   48612 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0422 17:52:01.402273   48612 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0422 17:52:01.402286   48612 command_runner.go:130] > #
	I0422 17:52:01.402301   48612 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0422 17:52:01.402310   48612 command_runner.go:130] > # feature.
	I0422 17:52:01.402315   48612 command_runner.go:130] > #
	I0422 17:52:01.402328   48612 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0422 17:52:01.402341   48612 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0422 17:52:01.402357   48612 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0422 17:52:01.402366   48612 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0422 17:52:01.402374   48612 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0422 17:52:01.402379   48612 command_runner.go:130] > #
	I0422 17:52:01.402389   48612 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0422 17:52:01.402402   48612 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0422 17:52:01.402416   48612 command_runner.go:130] > #
	I0422 17:52:01.402432   48612 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0422 17:52:01.402444   48612 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0422 17:52:01.402452   48612 command_runner.go:130] > #
	I0422 17:52:01.402461   48612 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0422 17:52:01.402471   48612 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0422 17:52:01.402477   48612 command_runner.go:130] > # limitation.
	I0422 17:52:01.402482   48612 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0422 17:52:01.402488   48612 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0422 17:52:01.402492   48612 command_runner.go:130] > runtime_type = "oci"
	I0422 17:52:01.402498   48612 command_runner.go:130] > runtime_root = "/run/runc"
	I0422 17:52:01.402503   48612 command_runner.go:130] > runtime_config_path = ""
	I0422 17:52:01.402509   48612 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0422 17:52:01.402514   48612 command_runner.go:130] > monitor_cgroup = "pod"
	I0422 17:52:01.402520   48612 command_runner.go:130] > monitor_exec_cgroup = ""
	I0422 17:52:01.402524   48612 command_runner.go:130] > monitor_env = [
	I0422 17:52:01.402536   48612 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0422 17:52:01.402544   48612 command_runner.go:130] > ]
	I0422 17:52:01.402553   48612 command_runner.go:130] > privileged_without_host_devices = false
	I0422 17:52:01.402566   48612 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0422 17:52:01.402578   48612 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0422 17:52:01.402590   48612 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0422 17:52:01.402604   48612 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0422 17:52:01.402616   48612 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0422 17:52:01.402629   48612 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0422 17:52:01.402640   48612 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0422 17:52:01.402649   48612 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0422 17:52:01.402657   48612 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0422 17:52:01.402664   48612 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0422 17:52:01.402670   48612 command_runner.go:130] > # Example:
	I0422 17:52:01.402675   48612 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0422 17:52:01.402682   48612 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0422 17:52:01.402687   48612 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0422 17:52:01.402693   48612 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0422 17:52:01.402696   48612 command_runner.go:130] > # cpuset = 0
	I0422 17:52:01.402703   48612 command_runner.go:130] > # cpushares = "0-1"
	I0422 17:52:01.402706   48612 command_runner.go:130] > # Where:
	I0422 17:52:01.402713   48612 command_runner.go:130] > # The workload name is workload-type.
	I0422 17:52:01.402720   48612 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0422 17:52:01.402727   48612 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0422 17:52:01.402733   48612 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0422 17:52:01.402744   48612 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0422 17:52:01.402753   48612 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0422 17:52:01.402762   48612 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0422 17:52:01.402777   48612 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0422 17:52:01.402785   48612 command_runner.go:130] > # Default value is set to true
	I0422 17:52:01.402790   48612 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0422 17:52:01.402797   48612 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0422 17:52:01.402804   48612 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0422 17:52:01.402808   48612 command_runner.go:130] > # Default value is set to 'false'
	I0422 17:52:01.402815   48612 command_runner.go:130] > # disable_hostport_mapping = false
	I0422 17:52:01.402821   48612 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0422 17:52:01.402825   48612 command_runner.go:130] > #
	I0422 17:52:01.402830   48612 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0422 17:52:01.402836   48612 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0422 17:52:01.402841   48612 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0422 17:52:01.402847   48612 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0422 17:52:01.402855   48612 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0422 17:52:01.402858   48612 command_runner.go:130] > [crio.image]
	I0422 17:52:01.402863   48612 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0422 17:52:01.402872   48612 command_runner.go:130] > # default_transport = "docker://"
	I0422 17:52:01.402877   48612 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0422 17:52:01.402883   48612 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0422 17:52:01.402886   48612 command_runner.go:130] > # global_auth_file = ""
	I0422 17:52:01.402891   48612 command_runner.go:130] > # The image used to instantiate infra containers.
	I0422 17:52:01.402895   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.402899   48612 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0422 17:52:01.402905   48612 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0422 17:52:01.402910   48612 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0422 17:52:01.402918   48612 command_runner.go:130] > # This option supports live configuration reload.
	I0422 17:52:01.402922   48612 command_runner.go:130] > # pause_image_auth_file = ""
	I0422 17:52:01.402927   48612 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0422 17:52:01.402932   48612 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0422 17:52:01.402937   48612 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0422 17:52:01.402943   48612 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0422 17:52:01.402947   48612 command_runner.go:130] > # pause_command = "/pause"
	I0422 17:52:01.402952   48612 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0422 17:52:01.402957   48612 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0422 17:52:01.402963   48612 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0422 17:52:01.402969   48612 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0422 17:52:01.402975   48612 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0422 17:52:01.402980   48612 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0422 17:52:01.402983   48612 command_runner.go:130] > # pinned_images = [
	I0422 17:52:01.402986   48612 command_runner.go:130] > # ]
	I0422 17:52:01.402992   48612 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0422 17:52:01.402997   48612 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0422 17:52:01.403003   48612 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0422 17:52:01.403012   48612 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0422 17:52:01.403017   48612 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0422 17:52:01.403024   48612 command_runner.go:130] > # signature_policy = ""
	I0422 17:52:01.403029   48612 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0422 17:52:01.403038   48612 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0422 17:52:01.403046   48612 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0422 17:52:01.403056   48612 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0422 17:52:01.403062   48612 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0422 17:52:01.403069   48612 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0422 17:52:01.403079   48612 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0422 17:52:01.403087   48612 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0422 17:52:01.403093   48612 command_runner.go:130] > # changing them here.
	I0422 17:52:01.403097   48612 command_runner.go:130] > # insecure_registries = [
	I0422 17:52:01.403103   48612 command_runner.go:130] > # ]
	I0422 17:52:01.403109   48612 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0422 17:52:01.403116   48612 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0422 17:52:01.403150   48612 command_runner.go:130] > # image_volumes = "mkdir"
	I0422 17:52:01.403161   48612 command_runner.go:130] > # Temporary directory to use for storing big files
	I0422 17:52:01.403166   48612 command_runner.go:130] > # big_files_temporary_dir = ""
	I0422 17:52:01.403174   48612 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0422 17:52:01.403180   48612 command_runner.go:130] > # CNI plugins.
	I0422 17:52:01.403184   48612 command_runner.go:130] > [crio.network]
	I0422 17:52:01.403192   48612 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0422 17:52:01.403200   48612 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0422 17:52:01.403205   48612 command_runner.go:130] > # cni_default_network = ""
	I0422 17:52:01.403210   48612 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0422 17:52:01.403218   48612 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0422 17:52:01.403223   48612 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0422 17:52:01.403229   48612 command_runner.go:130] > # plugin_dirs = [
	I0422 17:52:01.403233   48612 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0422 17:52:01.403239   48612 command_runner.go:130] > # ]
	I0422 17:52:01.403244   48612 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0422 17:52:01.403250   48612 command_runner.go:130] > [crio.metrics]
	I0422 17:52:01.403255   48612 command_runner.go:130] > # Globally enable or disable metrics support.
	I0422 17:52:01.403261   48612 command_runner.go:130] > enable_metrics = true
	I0422 17:52:01.403266   48612 command_runner.go:130] > # Specify enabled metrics collectors.
	I0422 17:52:01.403272   48612 command_runner.go:130] > # Per default all metrics are enabled.
	I0422 17:52:01.403278   48612 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0422 17:52:01.403287   48612 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0422 17:52:01.403295   48612 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0422 17:52:01.403300   48612 command_runner.go:130] > # metrics_collectors = [
	I0422 17:52:01.403304   48612 command_runner.go:130] > # 	"operations",
	I0422 17:52:01.403311   48612 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0422 17:52:01.403316   48612 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0422 17:52:01.403322   48612 command_runner.go:130] > # 	"operations_errors",
	I0422 17:52:01.403333   48612 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0422 17:52:01.403339   48612 command_runner.go:130] > # 	"image_pulls_by_name",
	I0422 17:52:01.403344   48612 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0422 17:52:01.403349   48612 command_runner.go:130] > # 	"image_pulls_failures",
	I0422 17:52:01.403356   48612 command_runner.go:130] > # 	"image_pulls_successes",
	I0422 17:52:01.403360   48612 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0422 17:52:01.403367   48612 command_runner.go:130] > # 	"image_layer_reuse",
	I0422 17:52:01.403371   48612 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0422 17:52:01.403377   48612 command_runner.go:130] > # 	"containers_oom_total",
	I0422 17:52:01.403381   48612 command_runner.go:130] > # 	"containers_oom",
	I0422 17:52:01.403387   48612 command_runner.go:130] > # 	"processes_defunct",
	I0422 17:52:01.403390   48612 command_runner.go:130] > # 	"operations_total",
	I0422 17:52:01.403397   48612 command_runner.go:130] > # 	"operations_latency_seconds",
	I0422 17:52:01.403401   48612 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0422 17:52:01.403408   48612 command_runner.go:130] > # 	"operations_errors_total",
	I0422 17:52:01.403412   48612 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0422 17:52:01.403422   48612 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0422 17:52:01.403429   48612 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0422 17:52:01.403434   48612 command_runner.go:130] > # 	"image_pulls_success_total",
	I0422 17:52:01.403440   48612 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0422 17:52:01.403445   48612 command_runner.go:130] > # 	"containers_oom_count_total",
	I0422 17:52:01.403451   48612 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0422 17:52:01.403456   48612 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0422 17:52:01.403461   48612 command_runner.go:130] > # ]
	I0422 17:52:01.403466   48612 command_runner.go:130] > # The port on which the metrics server will listen.
	I0422 17:52:01.403472   48612 command_runner.go:130] > # metrics_port = 9090
	I0422 17:52:01.403477   48612 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0422 17:52:01.403484   48612 command_runner.go:130] > # metrics_socket = ""
	I0422 17:52:01.403489   48612 command_runner.go:130] > # The certificate for the secure metrics server.
	I0422 17:52:01.403497   48612 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0422 17:52:01.403505   48612 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0422 17:52:01.403510   48612 command_runner.go:130] > # certificate on any modification event.
	I0422 17:52:01.403516   48612 command_runner.go:130] > # metrics_cert = ""
	I0422 17:52:01.403521   48612 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0422 17:52:01.403527   48612 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0422 17:52:01.403531   48612 command_runner.go:130] > # metrics_key = ""
	I0422 17:52:01.403541   48612 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0422 17:52:01.403548   48612 command_runner.go:130] > [crio.tracing]
	I0422 17:52:01.403553   48612 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0422 17:52:01.403560   48612 command_runner.go:130] > # enable_tracing = false
	I0422 17:52:01.403565   48612 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0422 17:52:01.403571   48612 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0422 17:52:01.403578   48612 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0422 17:52:01.403585   48612 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0422 17:52:01.403589   48612 command_runner.go:130] > # CRI-O NRI configuration.
	I0422 17:52:01.403595   48612 command_runner.go:130] > [crio.nri]
	I0422 17:52:01.403600   48612 command_runner.go:130] > # Globally enable or disable NRI.
	I0422 17:52:01.403606   48612 command_runner.go:130] > # enable_nri = false
	I0422 17:52:01.403612   48612 command_runner.go:130] > # NRI socket to listen on.
	I0422 17:52:01.403619   48612 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0422 17:52:01.403624   48612 command_runner.go:130] > # NRI plugin directory to use.
	I0422 17:52:01.403630   48612 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0422 17:52:01.403635   48612 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0422 17:52:01.403642   48612 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0422 17:52:01.403647   48612 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0422 17:52:01.403654   48612 command_runner.go:130] > # nri_disable_connections = false
	I0422 17:52:01.403659   48612 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0422 17:52:01.403666   48612 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0422 17:52:01.403671   48612 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0422 17:52:01.403678   48612 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0422 17:52:01.403684   48612 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0422 17:52:01.403689   48612 command_runner.go:130] > [crio.stats]
	I0422 17:52:01.403695   48612 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0422 17:52:01.403702   48612 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0422 17:52:01.403706   48612 command_runner.go:130] > # stats_collection_period = 0
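	For reference, an additional handler entry following the [crio.runtime.runtimes.runtime-handler] table format documented above would be a short TOML block. This is only an illustrative sketch: the "crun" handler name, binary path and root directory are assumptions and are not part of this node's configuration.

	    [crio.runtime.runtimes.crun]
	    # hypothetical handler; only "runc" is actually defined in the dump above
	    runtime_path = "/usr/bin/crun"
	    runtime_type = "oci"
	    runtime_root = "/run/crun"
	    monitor_path = "/usr/libexec/crio/conmon"
	    monitor_cgroup = "pod"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.Devices",
	    ]

	As the comments above note, a non-default handler is selected per pod through the CRI runtime handler (for example via a Kubernetes RuntimeClass); pods without one fall back to default_runtime.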
	I0422 17:52:01.403855   48612 cni.go:84] Creating CNI manager for ""
	I0422 17:52:01.403872   48612 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0422 17:52:01.403885   48612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 17:52:01.403905   48612 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-704531 NodeName:multinode-704531 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 17:52:01.404045   48612 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-704531"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 17:52:01.404104   48612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 17:52:01.414740   48612 command_runner.go:130] > kubeadm
	I0422 17:52:01.414774   48612 command_runner.go:130] > kubectl
	I0422 17:52:01.414778   48612 command_runner.go:130] > kubelet
	I0422 17:52:01.414808   48612 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 17:52:01.414863   48612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 17:52:01.426012   48612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0422 17:52:01.444484   48612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 17:52:01.462231   48612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
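	If the manifest written to /var/tmp/minikube/kubeadm.yaml.new needs to be cross-checked against what kubeadm actually applied, the ClusterConfiguration that kubeadm stores in the cluster can be compared with that file. A minimal sketch, assuming the kubeconfig context is named after the profile:

	    minikube -p multinode-704531 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
	    kubectl --context multinode-704531 -n kube-system get configmap kubeadm-config -o yaml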
	I0422 17:52:01.480152   48612 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I0422 17:52:01.484240   48612 command_runner.go:130] > 192.168.39.41	control-plane.minikube.internal
	I0422 17:52:01.484468   48612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 17:52:01.621815   48612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 17:52:01.636511   48612 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531 for IP: 192.168.39.41
	I0422 17:52:01.636536   48612 certs.go:194] generating shared ca certs ...
	I0422 17:52:01.636551   48612 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 17:52:01.636714   48612 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 17:52:01.636754   48612 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 17:52:01.636764   48612 certs.go:256] generating profile certs ...
	I0422 17:52:01.636837   48612 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/client.key
	I0422 17:52:01.636903   48612 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.key.5a12d634
	I0422 17:52:01.636943   48612 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.key
	I0422 17:52:01.636954   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0422 17:52:01.636974   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0422 17:52:01.636986   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0422 17:52:01.636998   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0422 17:52:01.637007   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0422 17:52:01.637020   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0422 17:52:01.637032   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0422 17:52:01.637043   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0422 17:52:01.637090   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 17:52:01.637120   48612 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 17:52:01.637130   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 17:52:01.637156   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 17:52:01.637179   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 17:52:01.637199   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 17:52:01.637231   48612 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 17:52:01.637260   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> /usr/share/ca-certificates/188842.pem
	I0422 17:52:01.637273   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:01.637285   48612 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem -> /usr/share/ca-certificates/18884.pem
	I0422 17:52:01.637843   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 17:52:01.663697   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 17:52:01.688909   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 17:52:01.713919   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 17:52:01.739491   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 17:52:01.764219   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 17:52:01.815597   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 17:52:01.888912   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/multinode-704531/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 17:52:01.928613   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 17:52:01.973988   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 17:52:02.003832   48612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 17:52:02.043020   48612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 17:52:02.060436   48612 ssh_runner.go:195] Run: openssl version
	I0422 17:52:02.073718   48612 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0422 17:52:02.073810   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 17:52:02.091779   48612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.101183   48612 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.101217   48612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.101262   48612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 17:52:02.108883   48612 command_runner.go:130] > b5213941
	I0422 17:52:02.109205   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 17:52:02.129187   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 17:52:02.151362   48612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.157338   48612 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.157461   48612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.157522   48612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 17:52:02.163464   48612 command_runner.go:130] > 51391683
	I0422 17:52:02.163745   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 17:52:02.173642   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 17:52:02.185279   48612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.189972   48612 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.190067   48612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.190128   48612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 17:52:02.196137   48612 command_runner.go:130] > 3ec20f2e
	I0422 17:52:02.196205   48612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 17:52:02.208424   48612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:52:02.213152   48612 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 17:52:02.213174   48612 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0422 17:52:02.213179   48612 command_runner.go:130] > Device: 253,1	Inode: 5245462     Links: 1
	I0422 17:52:02.213185   48612 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0422 17:52:02.213191   48612 command_runner.go:130] > Access: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213196   48612 command_runner.go:130] > Modify: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213200   48612 command_runner.go:130] > Change: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213205   48612 command_runner.go:130] >  Birth: 2024-04-22 17:45:10.645068654 +0000
	I0422 17:52:02.213254   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 17:52:02.219264   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.219321   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 17:52:02.225324   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.225516   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 17:52:02.231597   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.231663   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 17:52:02.237589   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.237635   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 17:52:02.243516   48612 command_runner.go:130] > Certificate will not expire
	I0422 17:52:02.243585   48612 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 17:52:02.249138   48612 command_runner.go:130] > Certificate will not expire
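	Each certificate above is checked one at a time with openssl's -checkend flag (a non-zero exit status means the certificate expires within the given window, 86400 seconds here). The same check can be run over all of them in one pass; a minimal sketch using the paths seen in the log, run on the node (e.g. via minikube ssh):

	    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	      openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring within 24h: $c"
	    done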
	I0422 17:52:02.249327   48612 kubeadm.go:391] StartCluster: {Name:multinode-704531 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-704531 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.19 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.141 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:52:02.249445   48612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 17:52:02.249488   48612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 17:52:02.293252   48612 command_runner.go:130] > 8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b
	I0422 17:52:02.293283   48612 command_runner.go:130] > 5a941051f7430fa5546b0dc808e18736747e102e993cdd06453fda10c2cc8aa8
	I0422 17:52:02.293292   48612 command_runner.go:130] > 2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf
	I0422 17:52:02.293300   48612 command_runner.go:130] > 0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc
	I0422 17:52:02.293334   48612 command_runner.go:130] > cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9
	I0422 17:52:02.293461   48612 command_runner.go:130] > d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c
	I0422 17:52:02.293512   48612 command_runner.go:130] > 70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd
	I0422 17:52:02.293530   48612 command_runner.go:130] > 04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238
	I0422 17:52:02.293581   48612 command_runner.go:130] > 809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227
	I0422 17:52:02.295148   48612 cri.go:89] found id: "8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b"
	I0422 17:52:02.295165   48612 cri.go:89] found id: "5a941051f7430fa5546b0dc808e18736747e102e993cdd06453fda10c2cc8aa8"
	I0422 17:52:02.295169   48612 cri.go:89] found id: "2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf"
	I0422 17:52:02.295173   48612 cri.go:89] found id: "0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc"
	I0422 17:52:02.295175   48612 cri.go:89] found id: "cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9"
	I0422 17:52:02.295179   48612 cri.go:89] found id: "d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c"
	I0422 17:52:02.295188   48612 cri.go:89] found id: "70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd"
	I0422 17:52:02.295191   48612 cri.go:89] found id: "04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238"
	I0422 17:52:02.295193   48612 cri.go:89] found id: "809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227"
	I0422 17:52:02.295198   48612 cri.go:89] found id: ""
	I0422 17:52:02.295235   48612 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.737307486Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808553737280178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5cca687-66c9-4219-8a61-b2c89b2cff8b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.737933228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cfce6d3-6969-4602-9c5f-481d860f3662 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.738012231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cfce6d3-6969-4602-9c5f-481d860f3662 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.739247345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713808322026573835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9117e7d64a8253d5e2ee23309263d027af705b46a57b4fd1fd5051834bc86b,PodSandboxId:a4e447cc6f932d38cfc5b8b68594a3ba6bd483e01b9039f816d24731ab44fe0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713808013761246284,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf,PodSandboxId:7d0ec5828f1fca0d9b875f7b951fa9051dbe9c42ae38d06ba86f06586e7b0500,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713807966107071187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc,PodSandboxId:6b27f46bb9e6eee91cbd3afbfc692abbadddb9989484d05137dc0d2605bd8ca8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713807934554517743,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kubernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9,PodSandboxId:66b4007bd8f4727741ae6122f57724ef2dd8c7e892653479fd3dc2f56ab92ca7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713807934229081913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-
ac158419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd,PodSandboxId:b8e008b39b0318e9c70bfd3c2ad26ff85a679302a82e8dfb952f0b4b2d80b066,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713807914701295945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{i
o.kubernetes.container.hash: fc381e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c,PodSandboxId:702623bbb0434dde5b0d50f1b9bfd4f2233268f8abe226b126c139ee1bbf033c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713807914726410666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238,PodSandboxId:a0df83489b2a0d58fa168e5758d8f70a7f93a3a26b325825c9cc3d60768f0f5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713807914700034523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227,PodSandboxId:cd7cdc5dbff4a87d9b4bd1f964a50d1ba316e6b8d58fc7e1de9da07b7571f72f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713807914629308567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 77cf3021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cfce6d3-6969-4602-9c5f-481d860f3662 name=/runtime.v1.RuntimeService/ListContainers
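	The entry above is CRI-O serving /runtime.v1.RuntimeService/ListContainers with an empty filter, which is why every container attempt (CONTAINER_RUNNING and CONTAINER_EXITED) comes back in a single response. A minimal sketch of issuing the same call over the CRI socket is shown below; the socket path and the k8s.io/cri-api client are assumptions for a CRI-O node, not details taken from this report.

	// crilist.go: call /runtime.v1.RuntimeService/ListContainers with an empty filter,
	// as the log entry above shows. The CRI-O socket path is an assumption.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{}, // no filters: full container list, as in the log
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
		}
	}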
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.788678606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e947f69f-6cd6-4ca1-8e77-e5fba8c2790e name=/runtime.v1.RuntimeService/Version
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.788826901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e947f69f-6cd6-4ca1-8e77-e5fba8c2790e name=/runtime.v1.RuntimeService/Version
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.790466837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=732c883b-09a9-49d5-9e95-700b45464f2a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.791240875Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808553791150427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=732c883b-09a9-49d5-9e95-700b45464f2a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.791761645Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce3c9214-d3ad-43b0-a661-1fe0dd465d8e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.791905085Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce3c9214-d3ad-43b0-a661-1fe0dd465d8e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.792381063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713808322026573835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9117e7d64a8253d5e2ee23309263d027af705b46a57b4fd1fd5051834bc86b,PodSandboxId:a4e447cc6f932d38cfc5b8b68594a3ba6bd483e01b9039f816d24731ab44fe0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713808013761246284,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf,PodSandboxId:7d0ec5828f1fca0d9b875f7b951fa9051dbe9c42ae38d06ba86f06586e7b0500,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713807966107071187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc,PodSandboxId:6b27f46bb9e6eee91cbd3afbfc692abbadddb9989484d05137dc0d2605bd8ca8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713807934554517743,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kubernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9,PodSandboxId:66b4007bd8f4727741ae6122f57724ef2dd8c7e892653479fd3dc2f56ab92ca7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713807934229081913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-
ac158419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd,PodSandboxId:b8e008b39b0318e9c70bfd3c2ad26ff85a679302a82e8dfb952f0b4b2d80b066,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713807914701295945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{i
o.kubernetes.container.hash: fc381e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c,PodSandboxId:702623bbb0434dde5b0d50f1b9bfd4f2233268f8abe226b126c139ee1bbf033c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713807914726410666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238,PodSandboxId:a0df83489b2a0d58fa168e5758d8f70a7f93a3a26b325825c9cc3d60768f0f5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713807914700034523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227,PodSandboxId:cd7cdc5dbff4a87d9b4bd1f964a50d1ba316e6b8d58fc7e1de9da07b7571f72f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713807914629308567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 77cf3021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ce3c9214-d3ad-43b0-a661-1fe0dd465d8e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.837078101Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e22c3094-a6e9-496f-83db-1847605f5f68 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.837390898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e22c3094-a6e9-496f-83db-1847605f5f68 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.838612677Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a2dc7d2-7f00-4b85-830d-063db3818339 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.839007169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808553838986450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a2dc7d2-7f00-4b85-830d-063db3818339 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.839501541Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60d73793-8776-4672-ba22-4e793726d757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.839556281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60d73793-8776-4672-ba22-4e793726d757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.839903730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713808322026573835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9117e7d64a8253d5e2ee23309263d027af705b46a57b4fd1fd5051834bc86b,PodSandboxId:a4e447cc6f932d38cfc5b8b68594a3ba6bd483e01b9039f816d24731ab44fe0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713808013761246284,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf,PodSandboxId:7d0ec5828f1fca0d9b875f7b951fa9051dbe9c42ae38d06ba86f06586e7b0500,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713807966107071187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc,PodSandboxId:6b27f46bb9e6eee91cbd3afbfc692abbadddb9989484d05137dc0d2605bd8ca8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713807934554517743,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kubernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9,PodSandboxId:66b4007bd8f4727741ae6122f57724ef2dd8c7e892653479fd3dc2f56ab92ca7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713807934229081913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-
ac158419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd,PodSandboxId:b8e008b39b0318e9c70bfd3c2ad26ff85a679302a82e8dfb952f0b4b2d80b066,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713807914701295945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{i
o.kubernetes.container.hash: fc381e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c,PodSandboxId:702623bbb0434dde5b0d50f1b9bfd4f2233268f8abe226b126c139ee1bbf033c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713807914726410666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238,PodSandboxId:a0df83489b2a0d58fa168e5758d8f70a7f93a3a26b325825c9cc3d60768f0f5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713807914700034523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227,PodSandboxId:cd7cdc5dbff4a87d9b4bd1f964a50d1ba316e6b8d58fc7e1de9da07b7571f72f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713807914629308567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 77cf3021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60d73793-8776-4672-ba22-4e793726d757 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.886993574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8432bcc-2fdc-4162-a662-75fe080e5027 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.887130173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8432bcc-2fdc-4162-a662-75fe080e5027 name=/runtime.v1.RuntimeService/Version
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.888386650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b39043f-fb88-405e-b21e-5ab17d83b2bb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.888766872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808553888745834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133243,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b39043f-fb88-405e-b21e-5ab17d83b2bb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.889605178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8c3025f-69ac-43c6-8538-cfea8ae9c7db name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.889839354Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8c3025f-69ac-43c6-8538-cfea8ae9c7db name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 17:55:53 multinode-704531 crio[2847]: time="2024-04-22 17:55:53.890618390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5994c70bb640eeefbf6bddb3a45663a163e4766ba0f88fc051a9d52f5d30fff,PodSandboxId:822725eae35bbae22584fa5310cd746e4618697b2ff758ba67b48d194e0209fb,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713808361891093075,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713808335199277880,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839,PodSandboxId:36a833c84c15aaf62c61783b0bbacc7acbba88149748ed92d476f84ae1c807dd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713808328895033601,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-ac15
8419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7162d60f78fe971253a623d037dce15b31fa18e8c4cf35a6b9777873b4f3f08b,PodSandboxId:a495ae5921972befa485aa59249982c4274a387a8321da025ac539d0a45b6edc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808328349807297,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},An
notations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb,PodSandboxId:1ff9ab84630ac0422422196db4f13121f5a4a5d64517aa694fa5e3250b96110a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713808328507313887,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kub
ernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d,PodSandboxId:01bc44ac6ed3718583757d88102e6af088855a08b694be653231c4b7d72c5ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713808328269911988,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]
string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861,PodSandboxId:0086062aaf9f2c4ea6ce150325eaa9eaf90ff9d27b3017f1732b9d6ee98c0843,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713808328334827483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kube
rnetes.container.hash: 77cf3021,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795,PodSandboxId:f16dc9b042a41336af4c957b19b1fb7319b46f8b4b500dffaf023203d6d15836,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713808328237689198,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.contain
er.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf,PodSandboxId:4579f0a9d1a17d6d9cf41b50c7d5f272fbab0f849b2f6e9048b44cd571eec9d8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713808328179901767,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{io.kubernetes.container.hash: fc381e49,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b,PodSandboxId:09758ae980f2669a7e67dda8d688dbbce544fd8a66904b385b3271204345067b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713808322026573835,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b9mkg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cc78a82-30c4-4a6d-8d0d-a214aa9e40b4,},Annotations:map[string]string{io.kubernetes.container.hash: a4ff794d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb9117e7d64a8253d5e2ee23309263d027af705b46a57b4fd1fd5051834bc86b,PodSandboxId:a4e447cc6f932d38cfc5b8b68594a3ba6bd483e01b9039f816d24731ab44fe0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713808013761246284,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-bl7n4,io
.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e8c1f2b8-194c-4567-93d8-77a38ede22cc,},Annotations:map[string]string{io.kubernetes.container.hash: a4dbc2d2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2524aeec685e8ec367aa4691bb07f4a72302f64c413c95a00555a865736cdfcf,PodSandboxId:7d0ec5828f1fca0d9b875f7b951fa9051dbe9c42ae38d06ba86f06586e7b0500,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713807966107071187,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 74c83b5d-e7bf-46d9-bf28-be78b4e89874,},Annotations:map[string]string{io.kubernetes.container.hash: 9144e52f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc,PodSandboxId:6b27f46bb9e6eee91cbd3afbfc692abbadddb9989484d05137dc0d2605bd8ca8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713807934554517743,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fpnzz,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9555b728-9998-4aa2-8c3c-5fb759a4b19f,},Annotations:map[string]string{io.kubernetes.container.hash: 9e1a558e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9,PodSandboxId:66b4007bd8f4727741ae6122f57724ef2dd8c7e892653479fd3dc2f56ab92ca7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713807934229081913,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-brdh6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b111ab97-6b54-4006-bc09-
ac158419ceb0,},Annotations:map[string]string{io.kubernetes.container.hash: 5250df2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd,PodSandboxId:b8e008b39b0318e9c70bfd3c2ad26ff85a679302a82e8dfb952f0b4b2d80b066,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713807914701295945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3430ee01f0bb1d67323a9ff296b867,},Annotations:map[string]string{i
o.kubernetes.container.hash: fc381e49,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c,PodSandboxId:702623bbb0434dde5b0d50f1b9bfd4f2233268f8abe226b126c139ee1bbf033c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713807914726410666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb29928136c8425bfab46a4e157ddb8,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238,PodSandboxId:a0df83489b2a0d58fa168e5758d8f70a7f93a3a26b325825c9cc3d60768f0f5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713807914700034523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95b7daaeea1c41dfd355334ee34a430c,},Annotations:map[string]string{io.k
ubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227,PodSandboxId:cd7cdc5dbff4a87d9b4bd1f964a50d1ba316e6b8d58fc7e1de9da07b7571f72f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713807914629308567,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-704531,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fff3f898b6674ab09e5d63885ec6b689,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 77cf3021,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8c3025f-69ac-43c6-8538-cfea8ae9c7db name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b5994c70bb640       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   822725eae35bb       busybox-fc5497c4f-bl7n4
	cb2973be4400b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   09758ae980f26       coredns-7db6d8ff4d-b9mkg
	f1930947df2f1       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      3 minutes ago       Running             kube-proxy                1                   36a833c84c15a       kube-proxy-brdh6
	ac77b416192fe       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   1ff9ab84630ac       kindnet-fpnzz
	7162d60f78fe9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   a495ae5921972       storage-provisioner
	98c3a104dcf75       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      3 minutes ago       Running             kube-apiserver            1                   0086062aaf9f2       kube-apiserver-multinode-704531
	720e239dd404f       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      3 minutes ago       Running             kube-controller-manager   1                   01bc44ac6ed37       kube-controller-manager-multinode-704531
	d229100cdd1f9       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      3 minutes ago       Running             kube-scheduler            1                   f16dc9b042a41       kube-scheduler-multinode-704531
	0c4c78f87155c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   4579f0a9d1a17       etcd-multinode-704531
	8e0db90880f97       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Exited              coredns                   1                   09758ae980f26       coredns-7db6d8ff4d-b9mkg
	cb9117e7d64a8       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   a4e447cc6f932       busybox-fc5497c4f-bl7n4
	2524aeec685e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   7d0ec5828f1fc       storage-provisioner
	0f76791d387a0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      10 minutes ago      Exited              kindnet-cni               0                   6b27f46bb9e6e       kindnet-fpnzz
	cc22cce807d1e       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b                                      10 minutes ago      Exited              kube-proxy                0                   66b4007bd8f47       kube-proxy-brdh6
	d49dfeca2d9d4       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced                                      10 minutes ago      Exited              kube-scheduler            0                   702623bbb0434       kube-scheduler-multinode-704531
	70d0eea95ffd4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   b8e008b39b031       etcd-multinode-704531
	04c12d47455f2       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b                                      10 minutes ago      Exited              kube-controller-manager   0                   a0df83489b2a0       kube-controller-manager-multinode-704531
	809aa2caf411e       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0                                      10 minutes ago      Exited              kube-apiserver            0                   cd7cdc5dbff4a       kube-apiserver-multinode-704531
	
	
	==> coredns [8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:43162 - 46839 "HINFO IN 851568339466806155.2057316636928923104. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.007270821s
	
	
	==> coredns [cb2973be4400b825505b02053450bf5652532a01ab4362488cb69acef32c310b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56281 - 56367 "HINFO IN 4100348979122285734.3932258701465694796. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010155518s
	
	
	==> describe nodes <==
	Name:               multinode-704531
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-704531
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=multinode-704531
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_45_21_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:45:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-704531
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:55:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:45:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:45:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:45:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:52:24 +0000   Mon, 22 Apr 2024 17:52:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    multinode-704531
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c8791a2e0d64a44bf77be520827dcc7
	  System UUID:                7c8791a2-e0d6-4a44-bf77-be520827dcc7
	  Boot ID:                    6cf024ff-2a11-43e2-a21f-9b6e95f58a47
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bl7n4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	  kube-system                 coredns-7db6d8ff4d-b9mkg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-704531                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-fpnzz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-704531             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-704531    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-brdh6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-704531             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 3m42s              kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node multinode-704531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node multinode-704531 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node multinode-704531 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                kubelet          Node multinode-704531 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node multinode-704531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node multinode-704531 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                node-controller  Node multinode-704531 event: Registered Node multinode-704531 in Controller
	  Normal  NodeReady                9m49s              kubelet          Node multinode-704531 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    3m41s              kubelet          Node multinode-704531 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 3m41s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m41s              kubelet          Node multinode-704531 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     3m41s              kubelet          Node multinode-704531 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m40s              kubelet          Node multinode-704531 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m40s              kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s              node-controller  Node multinode-704531 event: Registered Node multinode-704531 in Controller
	  Normal  NodeReady                3m30s              kubelet          Node multinode-704531 status is now: NodeReady
	
	
	Name:               multinode-704531-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-704531-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=multinode-704531
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_22T17_52_48_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:52:48 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-704531-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:53:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:54:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:54:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:54:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Apr 2024 17:53:18 +0000   Mon, 22 Apr 2024 17:54:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    multinode-704531-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 27c436d8b3ce4832a2682e4f6df119a2
	  System UUID:                27c436d8-b3ce-4832-a268-2e4f6df119a2
	  Boot ID:                    d93f299e-08cc-426d-a738-6784ee94be7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xppng    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  kube-system                 kindnet-qtksj              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m16s
	  kube-system                 kube-proxy-pdfj9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  Starting                 9m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m16s (x3 over 9m16s)  kubelet          Node multinode-704531-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s (x3 over 9m16s)  kubelet          Node multinode-704531-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s (x3 over 9m16s)  kubelet          Node multinode-704531-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m6s                   kubelet          Node multinode-704531-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m6s)    kubelet          Node multinode-704531-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m6s)    kubelet          Node multinode-704531-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m6s)    kubelet          Node multinode-704531-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m57s                  kubelet          Node multinode-704531-m02 status is now: NodeReady
	  Normal  NodeNotReady             105s                   node-controller  Node multinode-704531-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.170755] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.152802] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.309710] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.586925] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.059983] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.647642] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +0.365284] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.194570] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.080365] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.679981] systemd-fstab-generator[1477]: Ignoring "noauto" option for root device
	[  +0.096222] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 17:46] kauditd_printk_skb: 60 callbacks suppressed
	[ +45.265896] kauditd_printk_skb: 12 callbacks suppressed
	[Apr22 17:51] systemd-fstab-generator[2768]: Ignoring "noauto" option for root device
	[  +0.155412] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +0.172746] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.148128] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.280151] systemd-fstab-generator[2834]: Ignoring "noauto" option for root device
	[Apr22 17:52] systemd-fstab-generator[2931]: Ignoring "noauto" option for root device
	[  +0.083106] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.428699] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.488416] systemd-fstab-generator[3811]: Ignoring "noauto" option for root device
	[  +0.094207] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.007370] systemd-fstab-generator[3930]: Ignoring "noauto" option for root device
	[  +8.089790] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [0c4c78f87155c13e03446c3a6757c374cb2a1c51ce505677a60d96bc374a84bf] <==
	{"level":"info","ts":"2024-04-22T17:52:08.713125Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T17:52:08.713136Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T17:52:08.713421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 switched to configuration voters=(10393760029520308295)"}
	{"level":"info","ts":"2024-04-22T17:52:08.713509Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","added-peer-id":"903e0dada8362847","added-peer-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2024-04-22T17:52:08.713638Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T17:52:08.713683Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T17:52:08.720971Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T17:52:08.722739Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"903e0dada8362847","initial-advertise-peer-urls":["https://192.168.39.41:2380"],"listen-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.41:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T17:52:08.72304Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T17:52:08.723325Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:52:08.726293Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:52:10.273973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T17:52:10.274033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T17:52:10.274076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgPreVoteResp from 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-04-22T17:52:10.27409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.274096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgVoteResp from 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.274104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became leader at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.274114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 903e0dada8362847 elected leader 903e0dada8362847 at term 3"}
	{"level":"info","ts":"2024-04-22T17:52:10.275726Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"903e0dada8362847","local-member-attributes":"{Name:multinode-704531 ClientURLs:[https://192.168.39.41:2379]}","request-path":"/0/members/903e0dada8362847/attributes","cluster-id":"b5cacf25c2f2940e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T17:52:10.275816Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T17:52:10.275883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T17:52:10.277252Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T17:52:10.277301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T17:52:10.277957Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.41:2379"}
	{"level":"info","ts":"2024-04-22T17:52:10.2795Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [70d0eea95ffd46c8feb41c340270ea9119ed12a2dadd62bb3d9daa2996604acd] <==
	{"level":"info","ts":"2024-04-22T17:45:15.772324Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T17:45:15.773235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T17:45:15.776638Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T17:46:38.382401Z","caller":"traceutil/trace.go:171","msg":"trace[1575186716] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"252.751837ms","start":"2024-04-22T17:46:38.129617Z","end":"2024-04-22T17:46:38.382369Z","steps":["trace[1575186716] 'process raft request'  (duration: 246.716948ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:46:42.397093Z","caller":"traceutil/trace.go:171","msg":"trace[204413692] linearizableReadLoop","detail":"{readStateIndex:555; appliedIndex:554; }","duration":"140.270643ms","start":"2024-04-22T17:46:42.256789Z","end":"2024-04-22T17:46:42.397059Z","steps":["trace[204413692] 'read index received'  (duration: 140.099653ms)","trace[204413692] 'applied index is now lower than readState.Index'  (duration: 170.389µs)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:46:42.397403Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.542392ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-704531-m02\" ","response":"range_response_count:1 size:2823"}
	{"level":"info","ts":"2024-04-22T17:46:42.397484Z","caller":"traceutil/trace.go:171","msg":"trace[2046346196] range","detail":"{range_begin:/registry/minions/multinode-704531-m02; range_end:; response_count:1; response_revision:526; }","duration":"140.70518ms","start":"2024-04-22T17:46:42.256765Z","end":"2024-04-22T17:46:42.39747Z","steps":["trace[2046346196] 'agreement among raft nodes before linearized reading'  (duration: 140.491818ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:46:42.397624Z","caller":"traceutil/trace.go:171","msg":"trace[1512357095] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"181.704772ms","start":"2024-04-22T17:46:42.215912Z","end":"2024-04-22T17:46:42.397617Z","steps":["trace[1512357095] 'process raft request'  (duration: 181.030703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T17:47:25.14486Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.760642ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2902445744713118401 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-704531-m03.17c8ab56300b9afc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-704531-m03.17c8ab56300b9afc\" value_size:646 lease:2902445744713117999 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-04-22T17:47:25.145013Z","caller":"traceutil/trace.go:171","msg":"trace[291727974] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"163.75079ms","start":"2024-04-22T17:47:24.981245Z","end":"2024-04-22T17:47:25.144996Z","steps":["trace[291727974] 'process raft request'  (duration: 163.704588ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:47:25.14523Z","caller":"traceutil/trace.go:171","msg":"trace[802930937] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"246.157952ms","start":"2024-04-22T17:47:24.899063Z","end":"2024-04-22T17:47:25.145221Z","steps":["trace[802930937] 'process raft request'  (duration: 143.300326ms)","trace[802930937] 'compare'  (duration: 101.665474ms)"],"step_count":2}
	{"level":"info","ts":"2024-04-22T17:47:25.145418Z","caller":"traceutil/trace.go:171","msg":"trace[396834445] linearizableReadLoop","detail":"{readStateIndex:661; appliedIndex:660; }","duration":"197.340143ms","start":"2024-04-22T17:47:24.948071Z","end":"2024-04-22T17:47:25.145411Z","steps":["trace[396834445] 'read index received'  (duration: 94.302241ms)","trace[396834445] 'applied index is now lower than readState.Index'  (duration: 103.036513ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T17:47:25.145622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.538166ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-704531-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-04-22T17:47:25.145667Z","caller":"traceutil/trace.go:171","msg":"trace[580240549] range","detail":"{range_begin:/registry/minions/multinode-704531-m03; range_end:; response_count:1; response_revision:623; }","duration":"197.593375ms","start":"2024-04-22T17:47:24.948066Z","end":"2024-04-22T17:47:25.145659Z","steps":["trace[580240549] 'agreement among raft nodes before linearized reading'  (duration: 197.489587ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:47:28.923508Z","caller":"traceutil/trace.go:171","msg":"trace[281588394] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"135.745117ms","start":"2024-04-22T17:47:28.787743Z","end":"2024-04-22T17:47:28.923488Z","steps":["trace[281588394] 'process raft request'  (duration: 135.583568ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T17:50:20.825479Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T17:50:20.825605Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-704531","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"]}
	{"level":"warn","ts":"2024-04-22T17:50:20.825778Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:50:20.82586Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:50:20.869218Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T17:50:20.869285Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T17:50:20.869352Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"903e0dada8362847","current-leader-member-id":"903e0dada8362847"}
	{"level":"info","ts":"2024-04-22T17:50:20.875465Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:50:20.875603Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-22T17:50:20.875615Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-704531","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"]}
	
	
	==> kernel <==
	 17:55:54 up 11 min,  0 users,  load average: 0.07, 0.16, 0.10
	Linux multinode-704531 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0f76791d387a08edd5025ebb3d77e80eb1ddfa3a52e35f15cc3150503d8e97cc] <==
	I0422 17:49:35.543258       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:49:45.550953       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:49:45.550998       1 main.go:227] handling current node
	I0422 17:49:45.551019       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:49:45.551032       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:49:45.551142       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:49:45.551223       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:49:55.564056       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:49:55.564107       1 main.go:227] handling current node
	I0422 17:49:55.564118       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:49:55.564131       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:49:55.564391       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:49:55.564422       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:50:05.572840       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:50:05.573043       1 main.go:227] handling current node
	I0422 17:50:05.573090       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:50:05.573112       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:50:05.573314       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:50:05.573350       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	I0422 17:50:15.578621       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:50:15.578711       1 main.go:227] handling current node
	I0422 17:50:15.578735       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:50:15.578753       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:50:15.578884       1 main.go:223] Handling node with IPs: map[192.168.39.141:{}]
	I0422 17:50:15.578910       1 main.go:250] Node multinode-704531-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ac77b416192feff83a78eeaad10c0b04c37ff25559ec1b95147988ee9546dfcb] <==
	I0422 17:54:51.903660       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:55:01.909108       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:55:01.909204       1 main.go:227] handling current node
	I0422 17:55:01.909236       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:55:01.909244       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:55:11.914701       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:55:11.914743       1 main.go:227] handling current node
	I0422 17:55:11.914758       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:55:11.914786       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:55:21.924908       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:55:21.924964       1 main.go:227] handling current node
	I0422 17:55:21.924976       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:55:21.924983       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:55:31.929667       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:55:31.929784       1 main.go:227] handling current node
	I0422 17:55:31.929808       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:55:31.929826       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:55:41.946762       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:55:41.946815       1 main.go:227] handling current node
	I0422 17:55:41.946834       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:55:41.946840       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	I0422 17:55:51.959334       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0422 17:55:51.959377       1 main.go:227] handling current node
	I0422 17:55:51.959389       1 main.go:223] Handling node with IPs: map[192.168.39.19:{}]
	I0422 17:55:51.959396       1 main.go:250] Node multinode-704531-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [809aa2caf411e9bf29f956412c4d0f4fed149f22833856b80ee88ca2e4a6a227] <==
	E0422 17:50:20.829551       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0422 17:50:20.841501       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.859114       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.859921       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860015       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860085       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860142       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860297       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.860572       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0422 17:50:20.861051       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0422 17:50:20.861492       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0422 17:50:20.861638       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0422 17:50:20.861820       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0422 17:50:20.861911       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.861987       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862054       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862117       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.859925       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862866       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862924       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.863011       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0422 17:50:20.863319       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	W0422 17:50:20.863365       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.862891       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 17:50:20.863408       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [98c3a104dcf753107b1a0832d5164c68654cab1f3435f36efe34e58b3ccf7861] <==
	I0422 17:52:11.620817       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 17:52:11.625810       1 aggregator.go:165] initial CRD sync complete...
	I0422 17:52:11.626019       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 17:52:11.626090       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 17:52:11.668307       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 17:52:11.671432       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 17:52:11.672702       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 17:52:11.675911       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 17:52:11.676003       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 17:52:11.676283       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 17:52:11.683972       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0422 17:52:11.688292       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0422 17:52:11.721119       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 17:52:11.724386       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 17:52:11.724407       1 policy_source.go:224] refreshing policies
	I0422 17:52:11.733009       1 cache.go:39] Caches are synced for autoregister controller
	I0422 17:52:11.756731       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 17:52:12.580263       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 17:52:14.515539       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 17:52:14.630237       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 17:52:14.642031       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 17:52:14.705854       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 17:52:14.717083       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 17:52:24.579540       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 17:52:24.687704       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [04c12d47455f22dc84e9a85c99dc5c2b7e4f6b87c283fc47f1ccb8a92e08c238] <==
	I0422 17:46:08.049094       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0422 17:46:38.432565       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m02\" does not exist"
	I0422 17:46:38.443525       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m02" podCIDRs=["10.244.1.0/24"]
	I0422 17:46:43.053854       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-704531-m02"
	I0422 17:46:48.491608       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:46:50.882528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.1656ms"
	I0422 17:46:50.933434       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.295014ms"
	I0422 17:46:50.933542       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.497µs"
	I0422 17:46:53.915248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.385563ms"
	I0422 17:46:53.915329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.133µs"
	I0422 17:46:54.234543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.376336ms"
	I0422 17:46:54.235128       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="227.631µs"
	I0422 17:47:25.147777       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m03\" does not exist"
	I0422 17:47:25.148014       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:47:25.214555       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m03" podCIDRs=["10.244.2.0/24"]
	I0422 17:47:28.072770       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-704531-m03"
	I0422 17:47:35.280364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:05.701989       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:06.847249       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m03\" does not exist"
	I0422 17:48:06.847357       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:06.873550       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m03" podCIDRs=["10.244.3.0/24"]
	I0422 17:48:15.020141       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:48:53.128333       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m03"
	I0422 17:48:53.193584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.083181ms"
	I0422 17:48:53.194153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.897µs"
	
	
	==> kube-controller-manager [720e239dd404f53d9061cb14c563c599483dfd4a63d69b6b23a4f4129bff295d] <==
	I0422 17:52:48.170098       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m02\" does not exist"
	I0422 17:52:48.180524       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m02" podCIDRs=["10.244.1.0/24"]
	I0422 17:52:50.062198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.327µs"
	I0422 17:52:50.112417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="92.161µs"
	I0422 17:52:50.124604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.323µs"
	I0422 17:52:50.138471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.205µs"
	I0422 17:52:50.147807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.392µs"
	I0422 17:52:50.151440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.87µs"
	I0422 17:52:57.280621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:52:57.304025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.844µs"
	I0422 17:52:57.325089       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.294µs"
	I0422 17:53:01.182981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.088833ms"
	I0422 17:53:01.190941       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.394565ms"
	I0422 17:53:16.519856       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:53:17.688343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:53:17.688436       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-704531-m03\" does not exist"
	I0422 17:53:17.718108       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-704531-m03" podCIDRs=["10.244.2.0/24"]
	I0422 17:53:26.830125       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:53:32.505902       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-704531-m02"
	I0422 17:54:09.826147       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.120286ms"
	I0422 17:54:09.826302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.892µs"
	I0422 17:54:24.681670       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tq7ss"
	I0422 17:54:24.712295       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-tq7ss"
	I0422 17:54:24.712346       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kr7f2"
	I0422 17:54:24.735331       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-kr7f2"
	
	
	==> kube-proxy [cc22cce807d1e361431b5ec27a04098aaf4ad4952b949f49e94ef6aa2d94b7c9] <==
	I0422 17:45:34.873585       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:45:34.973932       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	I0422 17:45:35.072662       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:45:35.072757       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:45:35.072785       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:45:35.075587       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:45:35.075816       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:45:35.076014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:45:35.077033       1 config.go:192] "Starting service config controller"
	I0422 17:45:35.077119       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:45:35.077338       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:45:35.077376       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:45:35.078102       1 config.go:319] "Starting node config controller"
	I0422 17:45:35.079287       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:45:35.178219       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 17:45:35.178301       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:45:35.179724       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f1930947df2f1d755daa37b732213931238b0ba7186316432b34d21d1f279839] <==
	I0422 17:52:10.033837       1 server_linux.go:69] "Using iptables proxy"
	I0422 17:52:11.662083       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	I0422 17:52:11.737767       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 17:52:11.737832       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 17:52:11.737849       1 server_linux.go:165] "Using iptables Proxier"
	I0422 17:52:11.742695       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 17:52:11.742981       1 server.go:872] "Version info" version="v1.30.0"
	I0422 17:52:11.743014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 17:52:11.744692       1 config.go:192] "Starting service config controller"
	I0422 17:52:11.744728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 17:52:11.744752       1 config.go:101] "Starting endpoint slice config controller"
	I0422 17:52:11.744755       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 17:52:11.745115       1 config.go:319] "Starting node config controller"
	I0422 17:52:11.745147       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 17:52:11.845559       1 shared_informer.go:320] Caches are synced for node config
	I0422 17:52:11.845564       1 shared_informer.go:320] Caches are synced for service config
	I0422 17:52:11.845628       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d229100cdd1f99ebe047e3d61ad3d49d6ceb075b08bdfdeb6075528e21433795] <==
	W0422 17:52:11.637465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 17:52:11.637558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 17:52:11.637798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:52:11.637902       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 17:52:11.638037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 17:52:11.638113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:52:11.638437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.638473       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.638374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.638571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.643272       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:52:11.643383       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:52:11.643595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:52:11.643630       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:52:11.643938       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.644075       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.644108       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 17:52:11.644240       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 17:52:11.644254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:52:11.644268       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 17:52:11.644366       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:52:11.644453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:52:11.644482       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:52:11.644580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0422 17:52:12.614541       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d49dfeca2d9d434372948a5c47c038a3b06b2547336326b53daeed166e1f7a5c] <==
	E0422 17:45:18.403932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 17:45:18.420361       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:45:18.420456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 17:45:18.439957       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:45:18.440048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 17:45:18.523766       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 17:45:18.523900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 17:45:18.582840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:45:18.583036       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 17:45:18.673822       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 17:45:18.673937       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 17:45:18.721559       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:45:18.721705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 17:45:18.742956       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:45:18.743078       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 17:45:18.765709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 17:45:18.765827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 17:45:18.786274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 17:45:18.786386       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 17:45:18.907228       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:45:18.907283       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 17:45:18.907239       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:45:18.907307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0422 17:45:21.514549       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 17:50:20.837733       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938305    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/74c83b5d-e7bf-46d9-bf28-be78b4e89874-tmp\") pod \"storage-provisioner\" (UID: \"74c83b5d-e7bf-46d9-bf28-be78b4e89874\") " pod="kube-system/storage-provisioner"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938583    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b111ab97-6b54-4006-bc09-ac158419ceb0-xtables-lock\") pod \"kube-proxy-brdh6\" (UID: \"b111ab97-6b54-4006-bc09-ac158419ceb0\") " pod="kube-system/kube-proxy-brdh6"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938763    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9555b728-9998-4aa2-8c3c-5fb759a4b19f-cni-cfg\") pod \"kindnet-fpnzz\" (UID: \"9555b728-9998-4aa2-8c3c-5fb759a4b19f\") " pod="kube-system/kindnet-fpnzz"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.938880    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9555b728-9998-4aa2-8c3c-5fb759a4b19f-xtables-lock\") pod \"kindnet-fpnzz\" (UID: \"9555b728-9998-4aa2-8c3c-5fb759a4b19f\") " pod="kube-system/kindnet-fpnzz"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.939004    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b111ab97-6b54-4006-bc09-ac158419ceb0-lib-modules\") pod \"kube-proxy-brdh6\" (UID: \"b111ab97-6b54-4006-bc09-ac158419ceb0\") " pod="kube-system/kube-proxy-brdh6"
	Apr 22 17:52:14 multinode-704531 kubelet[3818]: I0422 17:52:14.939209    3818 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9555b728-9998-4aa2-8c3c-5fb759a4b19f-lib-modules\") pod \"kindnet-fpnzz\" (UID: \"9555b728-9998-4aa2-8c3c-5fb759a4b19f\") " pod="kube-system/kindnet-fpnzz"
	Apr 22 17:52:15 multinode-704531 kubelet[3818]: E0422 17:52:15.166100    3818 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-704531\" already exists" pod="kube-system/kube-apiserver-multinode-704531"
	Apr 22 17:52:15 multinode-704531 kubelet[3818]: I0422 17:52:15.170546    3818 scope.go:117] "RemoveContainer" containerID="8e0db90880f97b76aee8cb37ae7574f3e2a250c73d9a6ba747d7013b8df8214b"
	Apr 22 17:52:15 multinode-704531 kubelet[3818]: E0422 17:52:15.189867    3818 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-704531\" already exists" pod="kube-system/kube-controller-manager-multinode-704531"
	Apr 22 17:52:18 multinode-704531 kubelet[3818]: I0422 17:52:18.533624    3818 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 22 17:53:13 multinode-704531 kubelet[3818]: E0422 17:53:13.956737    3818 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:53:13 multinode-704531 kubelet[3818]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:54:13 multinode-704531 kubelet[3818]: E0422 17:54:13.955086    3818 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:54:13 multinode-704531 kubelet[3818]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:54:13 multinode-704531 kubelet[3818]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:54:13 multinode-704531 kubelet[3818]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:54:13 multinode-704531 kubelet[3818]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 17:55:13 multinode-704531 kubelet[3818]: E0422 17:55:13.955637    3818 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 17:55:13 multinode-704531 kubelet[3818]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 17:55:13 multinode-704531 kubelet[3818]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 17:55:13 multinode-704531 kubelet[3818]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 17:55:13 multinode-704531 kubelet[3818]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0422 17:55:53.440697   50473 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-704531 -n multinode-704531
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-704531 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.48s)

x
+
TestPreload (172.41s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-578761 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0422 17:59:50.951766   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 18:00:07.902369   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-578761 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m32.236088322s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-578761 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-578761 image pull gcr.io/k8s-minikube/busybox: (2.773389321s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-578761
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-578761: (7.319636068s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-578761 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0422 18:01:19.002420   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-578761 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.958459213s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-578761 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-04-22 18:02:26.070901698 +0000 UTC m=+3930.800515298
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-578761 -n test-preload-578761
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-578761 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-578761 logs -n 25: (1.115055538s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531 sudo cat                                       | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m03_multinode-704531.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt                       | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m02:/home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n                                                                 | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | multinode-704531-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-704531 ssh -n multinode-704531-m02 sudo cat                                   | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	|         | /home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-704531 node stop m03                                                          | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:47 UTC |
	| node    | multinode-704531 node start                                                             | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:47 UTC | 22 Apr 24 17:48 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-704531                                                                | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:48 UTC |                     |
	| stop    | -p multinode-704531                                                                     | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:48 UTC |                     |
	| start   | -p multinode-704531                                                                     | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:50 UTC | 22 Apr 24 17:53 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-704531                                                                | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:53 UTC |                     |
	| node    | multinode-704531 node delete                                                            | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:53 UTC | 22 Apr 24 17:53 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-704531 stop                                                                   | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:53 UTC |                     |
	| start   | -p multinode-704531                                                                     | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:55 UTC | 22 Apr 24 17:58 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-704531                                                                | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:58 UTC |                     |
	| start   | -p multinode-704531-m02                                                                 | multinode-704531-m02 | jenkins | v1.33.0 | 22 Apr 24 17:58 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-704531-m03                                                                 | multinode-704531-m03 | jenkins | v1.33.0 | 22 Apr 24 17:58 UTC | 22 Apr 24 17:59 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-704531                                                                 | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC |                     |
	| delete  | -p multinode-704531-m03                                                                 | multinode-704531-m03 | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 17:59 UTC |
	| delete  | -p multinode-704531                                                                     | multinode-704531     | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 17:59 UTC |
	| start   | -p test-preload-578761                                                                  | test-preload-578761  | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 18:01 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-578761 image pull                                                          | test-preload-578761  | jenkins | v1.33.0 | 22 Apr 24 18:01 UTC | 22 Apr 24 18:01 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-578761                                                                  | test-preload-578761  | jenkins | v1.33.0 | 22 Apr 24 18:01 UTC | 22 Apr 24 18:01 UTC |
	| start   | -p test-preload-578761                                                                  | test-preload-578761  | jenkins | v1.33.0 | 22 Apr 24 18:01 UTC | 22 Apr 24 18:02 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-578761 image list                                                          | test-preload-578761  | jenkins | v1.33.0 | 22 Apr 24 18:02 UTC | 22 Apr 24 18:02 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:01:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:01:18.936103   52815 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:01:18.936260   52815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:01:18.936271   52815 out.go:304] Setting ErrFile to fd 2...
	I0422 18:01:18.936278   52815 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:01:18.936522   52815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:01:18.937063   52815 out.go:298] Setting JSON to false
	I0422 18:01:18.937989   52815 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6224,"bootTime":1713802655,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:01:18.938048   52815 start.go:139] virtualization: kvm guest
	I0422 18:01:18.940414   52815 out.go:177] * [test-preload-578761] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:01:18.941994   52815 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:01:18.941931   52815 notify.go:220] Checking for updates...
	I0422 18:01:18.943664   52815 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:01:18.945305   52815 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:01:18.946574   52815 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:01:18.948086   52815 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:01:18.949542   52815 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:01:18.951570   52815 config.go:182] Loaded profile config "test-preload-578761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0422 18:01:18.952002   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:01:18.952048   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:01:18.966543   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
	I0422 18:01:18.966903   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:01:18.967493   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:01:18.967526   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:01:18.967864   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:01:18.968057   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:18.970096   52815 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:01:18.971761   52815 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:01:18.972054   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:01:18.972089   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:01:18.986577   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0422 18:01:18.987070   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:01:18.987546   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:01:18.987570   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:01:18.987878   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:01:18.988076   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:19.023335   52815 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:01:19.024892   52815 start.go:297] selected driver: kvm2
	I0422 18:01:19.024905   52815 start.go:901] validating driver "kvm2" against &{Name:test-preload-578761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-578761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:01:19.025002   52815 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:01:19.025635   52815 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:01:19.025701   52815 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:01:19.040494   52815 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:01:19.040791   52815 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:01:19.040859   52815 cni.go:84] Creating CNI manager for ""
	I0422 18:01:19.040872   52815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:01:19.040920   52815 start.go:340] cluster config:
	{Name:test-preload-578761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-578761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:01:19.041022   52815 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:01:19.042843   52815 out.go:177] * Starting "test-preload-578761" primary control-plane node in "test-preload-578761" cluster
	I0422 18:01:19.044495   52815 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0422 18:01:19.144269   52815 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0422 18:01:19.144335   52815 cache.go:56] Caching tarball of preloaded images
	I0422 18:01:19.144489   52815 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0422 18:01:19.146350   52815 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0422 18:01:19.147744   52815 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0422 18:01:19.243686   52815 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0422 18:01:30.753288   52815 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0422 18:01:30.753391   52815 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0422 18:01:31.597872   52815 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0422 18:01:31.597990   52815 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/config.json ...
	I0422 18:01:31.598204   52815 start.go:360] acquireMachinesLock for test-preload-578761: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:01:31.598262   52815 start.go:364] duration metric: took 36.01µs to acquireMachinesLock for "test-preload-578761"
	I0422 18:01:31.598277   52815 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:01:31.598285   52815 fix.go:54] fixHost starting: 
	I0422 18:01:31.598597   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:01:31.598637   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:01:31.612961   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0422 18:01:31.613375   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:01:31.613788   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:01:31.613810   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:01:31.614131   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:01:31.614306   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:31.614517   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetState
	I0422 18:01:31.616078   52815 fix.go:112] recreateIfNeeded on test-preload-578761: state=Stopped err=<nil>
	I0422 18:01:31.616117   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	W0422 18:01:31.616274   52815 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:01:31.618641   52815 out.go:177] * Restarting existing kvm2 VM for "test-preload-578761" ...
	I0422 18:01:31.620146   52815 main.go:141] libmachine: (test-preload-578761) Calling .Start
	I0422 18:01:31.620353   52815 main.go:141] libmachine: (test-preload-578761) Ensuring networks are active...
	I0422 18:01:31.621477   52815 main.go:141] libmachine: (test-preload-578761) Ensuring network default is active
	I0422 18:01:31.621814   52815 main.go:141] libmachine: (test-preload-578761) Ensuring network mk-test-preload-578761 is active
	I0422 18:01:31.622200   52815 main.go:141] libmachine: (test-preload-578761) Getting domain xml...
	I0422 18:01:31.622911   52815 main.go:141] libmachine: (test-preload-578761) Creating domain...
	I0422 18:01:32.805114   52815 main.go:141] libmachine: (test-preload-578761) Waiting to get IP...
	I0422 18:01:32.806113   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:32.806623   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:32.806685   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:32.806604   52900 retry.go:31] will retry after 218.954516ms: waiting for machine to come up
	I0422 18:01:33.027264   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:33.027765   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:33.027797   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:33.027705   52900 retry.go:31] will retry after 249.782696ms: waiting for machine to come up
	I0422 18:01:33.279274   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:33.279708   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:33.279732   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:33.279661   52900 retry.go:31] will retry after 316.610831ms: waiting for machine to come up
	I0422 18:01:33.598217   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:33.598872   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:33.598888   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:33.598814   52900 retry.go:31] will retry after 507.962783ms: waiting for machine to come up
	I0422 18:01:34.108542   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:34.108981   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:34.108997   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:34.108948   52900 retry.go:31] will retry after 521.513772ms: waiting for machine to come up
	I0422 18:01:34.631834   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:34.632154   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:34.632179   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:34.632135   52900 retry.go:31] will retry after 687.04073ms: waiting for machine to come up
	I0422 18:01:35.321106   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:35.321606   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:35.321633   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:35.321556   52900 retry.go:31] will retry after 1.143326316s: waiting for machine to come up
	I0422 18:01:36.466962   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:36.467370   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:36.467415   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:36.467324   52900 retry.go:31] will retry after 1.18883639s: waiting for machine to come up
	I0422 18:01:37.658258   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:37.658640   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:37.658667   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:37.658592   52900 retry.go:31] will retry after 1.138875776s: waiting for machine to come up
	I0422 18:01:38.798930   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:38.799396   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:38.799427   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:38.799358   52900 retry.go:31] will retry after 1.664776355s: waiting for machine to come up
	I0422 18:01:40.465394   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:40.465842   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:40.465871   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:40.465796   52900 retry.go:31] will retry after 2.158149015s: waiting for machine to come up
	I0422 18:01:42.625597   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:42.626021   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:42.626044   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:42.625986   52900 retry.go:31] will retry after 2.229263578s: waiting for machine to come up
	I0422 18:01:44.858349   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:44.858785   52815 main.go:141] libmachine: (test-preload-578761) DBG | unable to find current IP address of domain test-preload-578761 in network mk-test-preload-578761
	I0422 18:01:44.858811   52815 main.go:141] libmachine: (test-preload-578761) DBG | I0422 18:01:44.858740   52900 retry.go:31] will retry after 3.955798686s: waiting for machine to come up
	I0422 18:01:48.817231   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.817725   52815 main.go:141] libmachine: (test-preload-578761) Found IP for machine: 192.168.39.176
	I0422 18:01:48.817744   52815 main.go:141] libmachine: (test-preload-578761) Reserving static IP address...
	I0422 18:01:48.817762   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has current primary IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.818165   52815 main.go:141] libmachine: (test-preload-578761) Reserved static IP address: 192.168.39.176
	I0422 18:01:48.818187   52815 main.go:141] libmachine: (test-preload-578761) Waiting for SSH to be available...
	I0422 18:01:48.818209   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "test-preload-578761", mac: "52:54:00:47:de:4f", ip: "192.168.39.176"} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:48.818235   52815 main.go:141] libmachine: (test-preload-578761) DBG | skip adding static IP to network mk-test-preload-578761 - found existing host DHCP lease matching {name: "test-preload-578761", mac: "52:54:00:47:de:4f", ip: "192.168.39.176"}
	I0422 18:01:48.818248   52815 main.go:141] libmachine: (test-preload-578761) DBG | Getting to WaitForSSH function...
	I0422 18:01:48.820729   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.821036   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:48.821068   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.821172   52815 main.go:141] libmachine: (test-preload-578761) DBG | Using SSH client type: external
	I0422 18:01:48.821199   52815 main.go:141] libmachine: (test-preload-578761) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa (-rw-------)
	I0422 18:01:48.821233   52815 main.go:141] libmachine: (test-preload-578761) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:01:48.821246   52815 main.go:141] libmachine: (test-preload-578761) DBG | About to run SSH command:
	I0422 18:01:48.821260   52815 main.go:141] libmachine: (test-preload-578761) DBG | exit 0
	I0422 18:01:48.951153   52815 main.go:141] libmachine: (test-preload-578761) DBG | SSH cmd err, output: <nil>: 
	I0422 18:01:48.951534   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetConfigRaw
	I0422 18:01:48.952077   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetIP
	I0422 18:01:48.954378   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.954651   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:48.954681   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.954827   52815 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/config.json ...
	I0422 18:01:48.954995   52815 machine.go:94] provisionDockerMachine start ...
	I0422 18:01:48.955011   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:48.955276   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:48.957549   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.957853   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:48.957875   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:48.958052   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:48.958228   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:48.958399   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:48.958524   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:48.958728   52815 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:48.958985   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0422 18:01:48.959000   52815 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:01:49.071827   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:01:49.071861   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetMachineName
	I0422 18:01:49.072095   52815 buildroot.go:166] provisioning hostname "test-preload-578761"
	I0422 18:01:49.072124   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetMachineName
	I0422 18:01:49.072328   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:49.075293   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.075717   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.075748   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.075835   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:49.076049   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.076206   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.076354   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:49.076552   52815 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:49.076709   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0422 18:01:49.076722   52815 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-578761 && echo "test-preload-578761" | sudo tee /etc/hostname
	I0422 18:01:49.202309   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-578761
	
	I0422 18:01:49.202346   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:49.204933   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.205299   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.205328   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.205479   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:49.205655   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.205828   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.205957   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:49.206126   52815 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:49.206333   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0422 18:01:49.206352   52815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-578761' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-578761/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-578761' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:01:49.328958   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:01:49.328987   52815 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:01:49.329023   52815 buildroot.go:174] setting up certificates
	I0422 18:01:49.329030   52815 provision.go:84] configureAuth start
	I0422 18:01:49.329038   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetMachineName
	I0422 18:01:49.329286   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetIP
	I0422 18:01:49.332079   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.332441   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.332470   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.332633   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:49.335061   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.335415   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.335441   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.335582   52815 provision.go:143] copyHostCerts
	I0422 18:01:49.335631   52815 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:01:49.335643   52815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:01:49.335728   52815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:01:49.335852   52815 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:01:49.335871   52815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:01:49.335910   52815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:01:49.335987   52815 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:01:49.335998   52815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:01:49.336029   52815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:01:49.336098   52815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.test-preload-578761 san=[127.0.0.1 192.168.39.176 localhost minikube test-preload-578761]
	I0422 18:01:49.374172   52815 provision.go:177] copyRemoteCerts
	I0422 18:01:49.374234   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:01:49.374288   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:49.376861   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.377178   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.377204   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.377351   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:49.377540   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.377696   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:49.377823   52815 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa Username:docker}
	I0422 18:01:49.465972   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:01:49.490686   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0422 18:01:49.515998   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 18:01:49.539875   52815 provision.go:87] duration metric: took 210.832016ms to configureAuth
	I0422 18:01:49.539905   52815 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:01:49.540055   52815 config.go:182] Loaded profile config "test-preload-578761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0422 18:01:49.540116   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:49.542819   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.543197   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.543220   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.543379   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:49.543579   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.543726   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.543845   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:49.543992   52815 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:49.544150   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0422 18:01:49.544165   52815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:01:49.815273   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:01:49.815295   52815 machine.go:97] duration metric: took 860.289163ms to provisionDockerMachine
	I0422 18:01:49.815306   52815 start.go:293] postStartSetup for "test-preload-578761" (driver="kvm2")
	I0422 18:01:49.815333   52815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:01:49.815351   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:49.815685   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:01:49.815713   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:49.818276   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.818629   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.818658   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.818765   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:49.818959   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.819143   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:49.819288   52815 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa Username:docker}
	I0422 18:01:49.907079   52815 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:01:49.911598   52815 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:01:49.911626   52815 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:01:49.911689   52815 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:01:49.911779   52815 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:01:49.911869   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:01:49.922572   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:01:49.946648   52815 start.go:296] duration metric: took 131.317549ms for postStartSetup
	I0422 18:01:49.946689   52815 fix.go:56] duration metric: took 18.348404459s for fixHost
	I0422 18:01:49.946710   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:49.949315   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.949623   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:49.949662   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:49.949818   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:49.950009   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.950177   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:49.950310   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:49.950505   52815 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:49.950660   52815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I0422 18:01:49.950669   52815 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:01:50.064154   52815 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713808910.031917361
	
	I0422 18:01:50.064179   52815 fix.go:216] guest clock: 1713808910.031917361
	I0422 18:01:50.064189   52815 fix.go:229] Guest: 2024-04-22 18:01:50.031917361 +0000 UTC Remote: 2024-04-22 18:01:49.946693458 +0000 UTC m=+31.056748528 (delta=85.223903ms)
	I0422 18:01:50.064231   52815 fix.go:200] guest clock delta is within tolerance: 85.223903ms
	I0422 18:01:50.064240   52815 start.go:83] releasing machines lock for "test-preload-578761", held for 18.465967634s
	I0422 18:01:50.064263   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:50.064575   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetIP
	I0422 18:01:50.067142   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:50.067549   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:50.067585   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:50.067707   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:50.068200   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:50.068385   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:01:50.068474   52815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:01:50.068517   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:50.068559   52815 ssh_runner.go:195] Run: cat /version.json
	I0422 18:01:50.068575   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:01:50.070996   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:50.071366   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:50.071392   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:50.071467   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:50.071606   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:50.071794   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:50.071834   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:50.071866   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:50.071951   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:50.072044   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:01:50.072116   52815 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa Username:docker}
	I0422 18:01:50.072193   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:01:50.072335   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:01:50.072458   52815 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa Username:docker}
	I0422 18:01:50.152456   52815 ssh_runner.go:195] Run: systemctl --version
	I0422 18:01:50.191523   52815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:01:50.338239   52815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:01:50.346085   52815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:01:50.346198   52815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:01:50.363290   52815 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:01:50.363317   52815 start.go:494] detecting cgroup driver to use...
	I0422 18:01:50.363388   52815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:01:50.380243   52815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:01:50.395728   52815 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:01:50.395786   52815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:01:50.410747   52815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:01:50.426383   52815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:01:50.544283   52815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:01:50.711620   52815 docker.go:233] disabling docker service ...
	I0422 18:01:50.711693   52815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:01:50.726921   52815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:01:50.740882   52815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:01:50.878342   52815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:01:51.004439   52815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:01:51.018596   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:01:51.037312   52815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0422 18:01:51.037379   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:01:51.048698   52815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:01:51.048771   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:01:51.060068   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:01:51.071583   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:01:51.083260   52815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:01:51.095155   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:01:51.106827   52815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:01:51.124757   52815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
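The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.7, force the "cgroupfs" cgroup manager, and re-add conmon_cgroup under it. A rough Go model of those edits applied to a made-up sample config (illustration only, not the real file contents):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up sample of /etc/crio/crio.conf.d/02-crio.conf, for illustration only.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.7"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// sed -i '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")

	// sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}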
	I0422 18:01:51.136648   52815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:01:51.147463   52815 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:01:51.147533   52815 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:01:51.162749   52815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:01:51.173520   52815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:01:51.297453   52815 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:01:51.438021   52815 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:01:51.438079   52815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:01:51.443330   52815 start.go:562] Will wait 60s for crictl version
	I0422 18:01:51.443389   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:51.447341   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:01:51.482245   52815 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:01:51.482334   52815 ssh_runner.go:195] Run: crio --version
	I0422 18:01:51.511791   52815 ssh_runner.go:195] Run: crio --version
	I0422 18:01:51.543114   52815 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0422 18:01:51.544603   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetIP
	I0422 18:01:51.547301   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:51.547655   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:01:51.547675   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:01:51.547916   52815 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 18:01:51.552532   52815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
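The one-liner above refreshes the host.minikube.internal entry: strip any stale line from /etc/hosts, append the current mapping, and copy the temp file back with sudo. A pure-Go equivalent for illustration, pointed at a scratch file (hosts.test) rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" entry, mirroring the shell pipeline in the log.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // same filter as `grep -v $'\thost.minikube.internal$'`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch file and values chosen for the example; the log targets /etc/hosts.
	if err := upsertHostsEntry("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	fmt.Println("hosts.test updated")
}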
	I0422 18:01:51.566151   52815 kubeadm.go:877] updating cluster {Name:test-preload-578761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-578761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:01:51.566278   52815 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0422 18:01:51.566339   52815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:01:51.604994   52815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0422 18:01:51.605053   52815 ssh_runner.go:195] Run: which lz4
	I0422 18:01:51.609364   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:01:51.613889   52815 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:01:51.613925   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0422 18:01:53.299041   52815 crio.go:462] duration metric: took 1.689699633s to copy over tarball
	I0422 18:01:53.299104   52815 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:01:55.727262   52815 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.428127926s)
	I0422 18:01:55.727309   52815 crio.go:469] duration metric: took 2.428241696s to extract the tarball
	I0422 18:01:55.727318   52815 ssh_runner.go:146] rm: /preloaded.tar.lz4
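The preload step above stats /preloaded.tar.lz4, scps the ~459MB archive over when it is missing, and unpacks it into /var so the container runtime can find the preloaded images. A minimal sketch of the node-side half of that flow, assuming root and an lz4 binary are available; paths mirror the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, equivalent to the `stat -c "%s %y" /preloaded.tar.lz4` run above.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("existence check for %s failed: %v (would scp it over first)\n", tarball, err)
		return
	}

	// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Printf("extract failed: %v\n", err)
		return
	}
	fmt.Println("preloaded images extracted under /var")
}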
	I0422 18:01:55.769764   52815 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:01:55.816310   52815 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0422 18:01:55.816334   52815 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:01:55.816387   52815 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:01:55.816427   52815 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0422 18:01:55.816438   52815 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0422 18:01:55.816454   52815 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0422 18:01:55.816496   52815 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0422 18:01:55.816431   52815 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0422 18:01:55.816589   52815 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0422 18:01:55.816604   52815 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0422 18:01:55.817866   52815 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:01:55.817866   52815 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0422 18:01:55.817996   52815 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0422 18:01:55.818005   52815 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0422 18:01:55.818017   52815 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0422 18:01:55.818050   52815 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0422 18:01:55.818068   52815 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0422 18:01:55.818062   52815 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0422 18:01:56.022498   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0422 18:01:56.026475   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0422 18:01:56.074550   52815 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0422 18:01:56.074597   52815 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0422 18:01:56.074633   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:56.081017   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0422 18:01:56.083463   52815 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0422 18:01:56.083516   52815 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0422 18:01:56.083560   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:56.083866   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0422 18:01:56.132641   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0422 18:01:56.132675   52815 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0422 18:01:56.132711   52815 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0422 18:01:56.132749   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:56.135376   52815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0422 18:01:56.135471   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0422 18:01:56.166384   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0422 18:01:56.170065   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0422 18:01:56.172001   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0422 18:01:56.172003   52815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0422 18:01:56.172090   52815 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0422 18:01:56.172094   52815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0422 18:01:56.172133   52815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0422 18:01:56.172183   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0422 18:01:56.172261   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0422 18:01:56.173463   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0422 18:01:56.319100   52815 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0422 18:01:56.319155   52815 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0422 18:01:56.319170   52815 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0422 18:01:56.319188   52815 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0422 18:01:56.319204   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:56.319217   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:56.319263   52815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0422 18:01:56.319275   52815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0422 18:01:56.319363   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0422 18:01:56.687820   52815 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:01:58.927195   52815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.755038144s)
	I0422 18:01:58.927235   52815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0422 18:01:58.927239   52815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0: (2.754956894s)
	I0422 18:01:58.927258   52815 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0422 18:01:58.927280   52815 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0422 18:01:58.927304   52815 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0422 18:01:58.927313   52815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6: (2.753815673s)
	I0422 18:01:58.927340   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:58.927306   52815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0422 18:01:58.927348   52815 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0422 18:01:58.927339   52815 ssh_runner.go:235] Completed: which crictl: (2.60812253s)
	I0422 18:01:58.927374   52815 ssh_runner.go:235] Completed: which crictl: (2.608146877s)
	I0422 18:01:58.927382   52815 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0422 18:01:58.927397   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0422 18:01:58.927413   52815 ssh_runner.go:195] Run: which crictl
	I0422 18:01:58.927413   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0422 18:01:58.927452   52815 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.608051909s)
	I0422 18:01:58.927481   52815 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.239635851s)
	I0422 18:01:58.927486   52815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0422 18:01:59.408564   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0422 18:01:59.408612   52815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0422 18:01:59.408575   52815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0422 18:01:59.408656   52815 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0422 18:01:59.408700   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0422 18:01:59.408661   52815 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0422 18:01:59.408747   52815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0422 18:01:59.408672   52815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0422 18:01:59.408866   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0422 18:02:00.195619   52815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0422 18:02:00.195670   52815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0422 18:02:00.195704   52815 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0422 18:02:00.195707   52815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0422 18:02:00.195759   52815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0422 18:02:00.195793   52815 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0422 18:02:00.195829   52815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0422 18:02:00.195840   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0422 18:02:00.195896   52815 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0422 18:02:00.205282   52815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0422 18:02:00.205777   52815 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0422 18:02:00.943425   52815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0422 18:02:00.943483   52815 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0422 18:02:00.943551   52815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0422 18:02:01.088720   52815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0422 18:02:01.088774   52815 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0422 18:02:01.088833   52815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0422 18:02:03.238587   52815 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.149713084s)
	I0422 18:02:03.238623   52815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0422 18:02:03.238650   52815 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0422 18:02:03.238745   52815 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0422 18:02:03.684671   52815 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0422 18:02:03.684727   52815 cache_images.go:123] Successfully loaded all cached images
	I0422 18:02:03.684735   52815 cache_images.go:92] duration metric: took 7.868390142s to LoadCachedImages
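LoadCachedImages above works through every required image: `podman image inspect` to see whether the runtime already has it, and `podman load -i` from the local cache directory when it does not. A simplified sketch of that loop; the image list and cache path are copied from the log, while the control flow (and the omission of the intermediate `crictl rmi` step) is an assumption for illustration:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cacheDir := "/var/lib/minikube/images"
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.24.4",
		"registry.k8s.io/kube-proxy:v1.24.4",
		"registry.k8s.io/pause:3.7",
		"registry.k8s.io/etcd:3.5.3-0",
		"registry.k8s.io/coredns/coredns:v1.8.6",
	}
	for _, img := range images {
		// e.g. registry.k8s.io/pause:3.7 -> pause_3.7 (matches the archive names in the log)
		base := strings.ReplaceAll(filepath.Base(img), ":", "_")
		archive := filepath.Join(cacheDir, base)

		if err := exec.Command("sudo", "podman", "image", "inspect", img).Run(); err == nil {
			fmt.Printf("%s already present, skipping\n", img)
			continue
		}
		fmt.Printf("%s needs transfer, loading %s\n", img, archive)
		if out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput(); err != nil {
			fmt.Printf("load failed: %v\n%s", err, out)
		}
	}
}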
	I0422 18:02:03.684749   52815 kubeadm.go:928] updating node { 192.168.39.176 8443 v1.24.4 crio true true} ...
	I0422 18:02:03.684903   52815 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-578761 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-578761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:02:03.684976   52815 ssh_runner.go:195] Run: crio config
	I0422 18:02:03.733687   52815 cni.go:84] Creating CNI manager for ""
	I0422 18:02:03.733711   52815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:02:03.733725   52815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:02:03.733742   52815 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-578761 NodeName:test-preload-578761 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:02:03.733869   52815 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-578761"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.176
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:02:03.733925   52815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0422 18:02:03.744058   52815 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:02:03.744124   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:02:03.753568   52815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0422 18:02:03.770696   52815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:02:03.787523   52815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0422 18:02:03.805645   52815 ssh_runner.go:195] Run: grep 192.168.39.176	control-plane.minikube.internal$ /etc/hosts
	I0422 18:02:03.809719   52815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.176	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:02:03.822343   52815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:02:03.954006   52815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:02:03.971856   52815 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761 for IP: 192.168.39.176
	I0422 18:02:03.971882   52815 certs.go:194] generating shared ca certs ...
	I0422 18:02:03.971902   52815 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:02:03.972114   52815 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:02:03.972169   52815 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:02:03.972185   52815 certs.go:256] generating profile certs ...
	I0422 18:02:03.972299   52815 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/client.key
	I0422 18:02:03.972375   52815 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/apiserver.key.bf433f09
	I0422 18:02:03.972428   52815 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/proxy-client.key
	I0422 18:02:03.972571   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:02:03.972613   52815 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:02:03.972626   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:02:03.972668   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:02:03.972696   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:02:03.972723   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:02:03.972764   52815 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:02:03.973417   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:02:04.002641   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:02:04.028538   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:02:04.067184   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:02:04.096601   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0422 18:02:04.136513   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:02:04.169977   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:02:04.196631   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:02:04.223004   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:02:04.249082   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:02:04.273990   52815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:02:04.299849   52815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:02:04.318723   52815 ssh_runner.go:195] Run: openssl version
	I0422 18:02:04.324872   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:02:04.336563   52815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:02:04.341210   52815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:02:04.341276   52815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:02:04.347194   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:02:04.358346   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:02:04.370462   52815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:02:04.375284   52815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:02:04.375349   52815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:02:04.381289   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:02:04.392622   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:02:04.403680   52815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:02:04.408415   52815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:02:04.408484   52815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:02:04.414143   52815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:02:04.424859   52815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:02:04.429465   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:02:04.435538   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:02:04.441329   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:02:04.447413   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:02:04.453225   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:02:04.459037   52815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
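Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether a control-plane certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509, with a placeholder certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the inverse of `openssl x509 -checkend` succeeding.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// "apiserver.crt" is a placeholder; point it at any PEM certificate.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; regenerate it")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}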
	I0422 18:02:04.464750   52815 kubeadm.go:391] StartCluster: {Name:test-preload-578761 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-578761 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:02:04.464827   52815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:02:04.464887   52815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:02:04.508697   52815 cri.go:89] found id: ""
	I0422 18:02:04.508765   52815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:02:04.519174   52815 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:02:04.519197   52815 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:02:04.519202   52815 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:02:04.519246   52815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:02:04.528781   52815 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:02:04.529244   52815 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-578761" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:02:04.529349   52815 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-578761" cluster setting kubeconfig missing "test-preload-578761" context setting]
	I0422 18:02:04.529653   52815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:02:04.530245   52815 kapi.go:59] client config for test-preload-578761: &rest.Config{Host:"https://192.168.39.176:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0422 18:02:04.530749   52815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:02:04.539913   52815 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.176
	I0422 18:02:04.539947   52815 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:02:04.539960   52815 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:02:04.540010   52815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:02:04.577953   52815 cri.go:89] found id: ""
	I0422 18:02:04.578035   52815 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:02:04.595061   52815 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:02:04.604730   52815 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:02:04.604751   52815 kubeadm.go:156] found existing configuration files:
	
	I0422 18:02:04.604806   52815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:02:04.613766   52815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:02:04.613826   52815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:02:04.623380   52815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:02:04.632640   52815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:02:04.632719   52815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:02:04.642624   52815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:02:04.651870   52815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:02:04.651928   52815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:02:04.661598   52815 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:02:04.670728   52815 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:02:04.670798   52815 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:02:04.681102   52815 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:02:04.691426   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:02:04.789592   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:02:05.603220   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:02:05.870935   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:02:05.958858   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:02:06.024522   52815 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:02:06.024607   52815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:02:06.524955   52815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:02:07.025290   52815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:02:07.042974   52815 api_server.go:72] duration metric: took 1.018450013s to wait for apiserver process to appear ...
	I0422 18:02:07.043007   52815 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:02:07.043028   52815 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0422 18:02:07.043586   52815 api_server.go:269] stopped: https://192.168.39.176:8443/healthz: Get "https://192.168.39.176:8443/healthz": dial tcp 192.168.39.176:8443: connect: connection refused
	I0422 18:02:07.543239   52815 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0422 18:02:10.649381   52815 api_server.go:279] https://192.168.39.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:02:10.649411   52815 api_server.go:103] status: https://192.168.39.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:02:10.649424   52815 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0422 18:02:10.707324   52815 api_server.go:279] https://192.168.39.176:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:02:10.707356   52815 api_server.go:103] status: https://192.168.39.176:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:02:11.043823   52815 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0422 18:02:11.049084   52815 api_server.go:279] https://192.168.39.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:02:11.049117   52815 api_server.go:103] status: https://192.168.39.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:02:11.543795   52815 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0422 18:02:11.549246   52815 api_server.go:279] https://192.168.39.176:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:02:11.549276   52815 api_server.go:103] status: https://192.168.39.176:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:02:12.043233   52815 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0422 18:02:12.048860   52815 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I0422 18:02:12.056215   52815 api_server.go:141] control plane version: v1.24.4
	I0422 18:02:12.056239   52815 api_server.go:131] duration metric: took 5.013225657s to wait for apiserver health ...
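	For reference, the exchange above is a plain retry loop: the anonymous probe gets 403 until the RBAC bootstrap roles exist, then 500 while the remaining post-start hooks finish, and finally 200. A minimal Go sketch of that style of healthz polling (illustrative only, not minikube's actual api_server.go; the URL, interval and the InsecureSkipVerify shortcut are assumptions for brevity):

	    // healthz_wait.go: poll an apiserver /healthz endpoint until it returns 200.
	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                // The apiserver serves a self-signed cert; a real client would load
	                // the cluster CA instead of skipping verification.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // healthz answered "ok"
	                }
	                // 403 (anonymous forbidden) and 500 (post-start hooks pending)
	                // both mean "not ready yet": fall through and retry.
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	    }

	    func main() {
	        if err := waitForHealthz("https://192.168.39.176:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }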
	I0422 18:02:12.056247   52815 cni.go:84] Creating CNI manager for ""
	I0422 18:02:12.056252   52815 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:02:12.058202   52815 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:02:12.059758   52815 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:02:12.086460   52815 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
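	The bridge CNI step only drops a conflist into /etc/cni/net.d; the log reports the file size (496 bytes) but not its contents, so the JSON below is a generic bridge/host-local example in the standard conflist format rather than the exact file minikube writes. A sketch, in Go, of generating such a file:

	    // write_cni.go: write a bridge CNI config to /etc/cni/net.d.
	    // The JSON is a generic bridge/host-local example, NOT minikube's real 1-k8s.conflist.
	    package main

	    import (
	        "log"
	        "os"
	        "path/filepath"
	    )

	    const bridgeConflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }`

	    func main() {
	        dir := "/etc/cni/net.d"
	        if err := os.MkdirAll(dir, 0o755); err != nil {
	            log.Fatal(err)
	        }
	        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
	            log.Fatal(err)
	        }
	    }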
	I0422 18:02:12.126253   52815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:02:12.141884   52815 system_pods.go:59] 8 kube-system pods found
	I0422 18:02:12.141917   52815 system_pods.go:61] "coredns-6d4b75cb6d-dm7g2" [f56d5bd5-f458-43dd-8622-3e848bf41a32] Running
	I0422 18:02:12.141924   52815 system_pods.go:61] "coredns-6d4b75cb6d-zmr6n" [40c1b9da-3105-4bbd-afec-295a0369900a] Running
	I0422 18:02:12.141928   52815 system_pods.go:61] "etcd-test-preload-578761" [396e0344-e0d8-4e6b-a330-0cb042e1431f] Running
	I0422 18:02:12.141936   52815 system_pods.go:61] "kube-apiserver-test-preload-578761" [dcd58e3e-8600-4aa4-ae03-7c315cf84d7b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:02:12.141942   52815 system_pods.go:61] "kube-controller-manager-test-preload-578761" [b5ae6cf2-5be3-47b4-b6be-5e2c27756035] Running
	I0422 18:02:12.141949   52815 system_pods.go:61] "kube-proxy-2k45z" [0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0] Running
	I0422 18:02:12.141954   52815 system_pods.go:61] "kube-scheduler-test-preload-578761" [931d2a13-568a-4cf7-8a2a-eec3b09caef4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:02:12.141959   52815 system_pods.go:61] "storage-provisioner" [88f90f35-0a89-4f56-945e-ccab1ab38fc9] Running
	I0422 18:02:12.141968   52815 system_pods.go:74] duration metric: took 15.663245ms to wait for pod list to return data ...
	I0422 18:02:12.141977   52815 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:02:12.148136   52815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:02:12.148164   52815 node_conditions.go:123] node cpu capacity is 2
	I0422 18:02:12.148177   52815 node_conditions.go:105] duration metric: took 6.194922ms to run NodePressure ...
	I0422 18:02:12.148197   52815 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:02:12.400503   52815 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:02:12.407449   52815 kubeadm.go:733] kubelet initialised
	I0422 18:02:12.407478   52815 kubeadm.go:734] duration metric: took 6.945995ms waiting for restarted kubelet to initialise ...
	I0422 18:02:12.407487   52815 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:02:12.418139   52815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dm7g2" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:12.423906   52815 pod_ready.go:97] node "test-preload-578761" hosting pod "coredns-6d4b75cb6d-dm7g2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.423933   52815 pod_ready.go:81] duration metric: took 5.767543ms for pod "coredns-6d4b75cb6d-dm7g2" in "kube-system" namespace to be "Ready" ...
	E0422 18:02:12.423946   52815 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-578761" hosting pod "coredns-6d4b75cb6d-dm7g2" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.423965   52815 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zmr6n" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:12.434241   52815 pod_ready.go:97] node "test-preload-578761" hosting pod "coredns-6d4b75cb6d-zmr6n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.434283   52815 pod_ready.go:81] duration metric: took 10.2864ms for pod "coredns-6d4b75cb6d-zmr6n" in "kube-system" namespace to be "Ready" ...
	E0422 18:02:12.434297   52815 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-578761" hosting pod "coredns-6d4b75cb6d-zmr6n" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.434306   52815 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:12.442471   52815 pod_ready.go:97] node "test-preload-578761" hosting pod "etcd-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.442499   52815 pod_ready.go:81] duration metric: took 8.177018ms for pod "etcd-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	E0422 18:02:12.442511   52815 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-578761" hosting pod "etcd-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.442520   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:12.531771   52815 pod_ready.go:97] node "test-preload-578761" hosting pod "kube-apiserver-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.531802   52815 pod_ready.go:81] duration metric: took 89.270507ms for pod "kube-apiserver-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	E0422 18:02:12.531811   52815 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-578761" hosting pod "kube-apiserver-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.531818   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:12.930087   52815 pod_ready.go:97] node "test-preload-578761" hosting pod "kube-controller-manager-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.930117   52815 pod_ready.go:81] duration metric: took 398.290232ms for pod "kube-controller-manager-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	E0422 18:02:12.930129   52815 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-578761" hosting pod "kube-controller-manager-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:12.930137   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-2k45z" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:13.330048   52815 pod_ready.go:97] node "test-preload-578761" hosting pod "kube-proxy-2k45z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:13.330115   52815 pod_ready.go:81] duration metric: took 399.931236ms for pod "kube-proxy-2k45z" in "kube-system" namespace to be "Ready" ...
	E0422 18:02:13.330175   52815 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-578761" hosting pod "kube-proxy-2k45z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:13.330191   52815 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:13.730218   52815 pod_ready.go:97] node "test-preload-578761" hosting pod "kube-scheduler-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:13.730252   52815 pod_ready.go:81] duration metric: took 400.051114ms for pod "kube-scheduler-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	E0422 18:02:13.730265   52815 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-578761" hosting pod "kube-scheduler-test-preload-578761" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:13.730312   52815 pod_ready.go:38] duration metric: took 1.322810202s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:02:13.730354   52815 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:02:13.743650   52815 ops.go:34] apiserver oom_adj: -16
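	The oom_adj probe confirms the restarted apiserver keeps a strongly negative OOM adjustment (-16 on the legacy oom_adj scale, where lower values make the kernel's OOM killer avoid the process and -17 disables it entirely). A small sketch of the same check, with the pid passed explicitly instead of via pgrep:

	    // oom_adj.go: print a process's OOM adjustment, as the test does for kube-apiserver.
	    package main

	    import (
	        "fmt"
	        "log"
	        "os"
	        "strings"
	    )

	    func main() {
	        if len(os.Args) != 2 {
	            log.Fatal("usage: oom_adj <pid>")
	        }
	        data, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
	        if err != nil {
	            log.Fatal(err)
	        }
	        // Lower values make the OOM killer less likely to pick this process.
	        fmt.Printf("oom_adj: %s\n", strings.TrimSpace(string(data)))
	    }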
	I0422 18:02:13.743669   52815 kubeadm.go:591] duration metric: took 9.224462803s to restartPrimaryControlPlane
	I0422 18:02:13.743678   52815 kubeadm.go:393] duration metric: took 9.278933497s to StartCluster
	I0422 18:02:13.743697   52815 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:02:13.743802   52815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:02:13.744541   52815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:02:13.744781   52815 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:02:13.746916   52815 out.go:177] * Verifying Kubernetes components...
	I0422 18:02:13.744828   52815 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:02:13.744977   52815 config.go:182] Loaded profile config "test-preload-578761": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0422 18:02:13.748649   52815 addons.go:69] Setting storage-provisioner=true in profile "test-preload-578761"
	I0422 18:02:13.748658   52815 addons.go:69] Setting default-storageclass=true in profile "test-preload-578761"
	I0422 18:02:13.748698   52815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-578761"
	I0422 18:02:13.748701   52815 addons.go:234] Setting addon storage-provisioner=true in "test-preload-578761"
	W0422 18:02:13.748712   52815 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:02:13.748736   52815 host.go:66] Checking if "test-preload-578761" exists ...
	I0422 18:02:13.748660   52815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:02:13.749028   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:02:13.749067   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:02:13.749162   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:02:13.749240   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:02:13.763816   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41715
	I0422 18:02:13.764282   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:02:13.764802   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:02:13.764843   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:02:13.765203   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:02:13.765385   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetState
	I0422 18:02:13.765673   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0422 18:02:13.766129   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:02:13.766701   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:02:13.766721   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:02:13.767080   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:02:13.767670   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:02:13.767717   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:02:13.767908   52815 kapi.go:59] client config for test-preload-578761: &rest.Config{Host:"https://192.168.39.176:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/client.crt", KeyFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/profiles/test-preload-578761/client.key", CAFile:"/home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02260), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0422 18:02:13.768214   52815 addons.go:234] Setting addon default-storageclass=true in "test-preload-578761"
	W0422 18:02:13.768234   52815 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:02:13.768265   52815 host.go:66] Checking if "test-preload-578761" exists ...
	I0422 18:02:13.768615   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:02:13.768653   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:02:13.782510   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0422 18:02:13.782669   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37291
	I0422 18:02:13.783060   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:02:13.783099   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:02:13.783617   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:02:13.783642   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:02:13.783624   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:02:13.783701   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:02:13.783963   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:02:13.784062   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:02:13.784237   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetState
	I0422 18:02:13.784528   52815 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:02:13.784576   52815 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:02:13.785874   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:02:13.788177   52815 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:02:13.789795   52815 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:02:13.789816   52815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:02:13.789837   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:02:13.792977   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:02:13.793483   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:02:13.793518   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:02:13.793670   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:02:13.793847   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:02:13.794027   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:02:13.794147   52815 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa Username:docker}
	I0422 18:02:13.805473   52815 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39705
	I0422 18:02:13.805860   52815 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:02:13.806418   52815 main.go:141] libmachine: Using API Version  1
	I0422 18:02:13.806448   52815 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:02:13.806738   52815 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:02:13.806969   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetState
	I0422 18:02:13.808651   52815 main.go:141] libmachine: (test-preload-578761) Calling .DriverName
	I0422 18:02:13.808935   52815 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:02:13.808950   52815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:02:13.808963   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHHostname
	I0422 18:02:13.811891   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:02:13.812513   52815 main.go:141] libmachine: (test-preload-578761) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:de:4f", ip: ""} in network mk-test-preload-578761: {Iface:virbr1 ExpiryTime:2024-04-22 19:01:43 +0000 UTC Type:0 Mac:52:54:00:47:de:4f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:test-preload-578761 Clientid:01:52:54:00:47:de:4f}
	I0422 18:02:13.812541   52815 main.go:141] libmachine: (test-preload-578761) DBG | domain test-preload-578761 has defined IP address 192.168.39.176 and MAC address 52:54:00:47:de:4f in network mk-test-preload-578761
	I0422 18:02:13.812728   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHPort
	I0422 18:02:13.812912   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHKeyPath
	I0422 18:02:13.813052   52815 main.go:141] libmachine: (test-preload-578761) Calling .GetSSHUsername
	I0422 18:02:13.813204   52815 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/test-preload-578761/id_rsa Username:docker}
	I0422 18:02:13.929692   52815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:02:13.947433   52815 node_ready.go:35] waiting up to 6m0s for node "test-preload-578761" to be "Ready" ...
	I0422 18:02:14.054790   52815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:02:14.057018   52815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:02:15.080950   52815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.02389538s)
	I0422 18:02:15.081010   52815 main.go:141] libmachine: Making call to close driver server
	I0422 18:02:15.081026   52815 main.go:141] libmachine: (test-preload-578761) Calling .Close
	I0422 18:02:15.081336   52815 main.go:141] libmachine: (test-preload-578761) DBG | Closing plugin on server side
	I0422 18:02:15.081380   52815 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:02:15.081391   52815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:02:15.081401   52815 main.go:141] libmachine: Making call to close driver server
	I0422 18:02:15.081410   52815 main.go:141] libmachine: (test-preload-578761) Calling .Close
	I0422 18:02:15.081647   52815 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:02:15.081679   52815 main.go:141] libmachine: (test-preload-578761) DBG | Closing plugin on server side
	I0422 18:02:15.081667   52815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026839017s)
	I0422 18:02:15.081716   52815 main.go:141] libmachine: Making call to close driver server
	I0422 18:02:15.081724   52815 main.go:141] libmachine: (test-preload-578761) Calling .Close
	I0422 18:02:15.081690   52815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:02:15.081967   52815 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:02:15.081986   52815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:02:15.082012   52815 main.go:141] libmachine: Making call to close driver server
	I0422 18:02:15.082021   52815 main.go:141] libmachine: (test-preload-578761) Calling .Close
	I0422 18:02:15.081972   52815 main.go:141] libmachine: (test-preload-578761) DBG | Closing plugin on server side
	I0422 18:02:15.082185   52815 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:02:15.082198   52815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:02:15.089318   52815 main.go:141] libmachine: Making call to close driver server
	I0422 18:02:15.089366   52815 main.go:141] libmachine: (test-preload-578761) Calling .Close
	I0422 18:02:15.089646   52815 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:02:15.089663   52815 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:02:15.091645   52815 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0422 18:02:15.092986   52815 addons.go:505] duration metric: took 1.348171564s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0422 18:02:15.953269   52815 node_ready.go:53] node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:18.452394   52815 node_ready.go:53] node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:20.453557   52815 node_ready.go:53] node "test-preload-578761" has status "Ready":"False"
	I0422 18:02:20.950702   52815 node_ready.go:49] node "test-preload-578761" has status "Ready":"True"
	I0422 18:02:20.950735   52815 node_ready.go:38] duration metric: took 7.003268586s for node "test-preload-578761" to be "Ready" ...
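	The node wait above is a poll of the node's Ready condition, which flipped to True after roughly 7 seconds. A minimal client-go sketch of that check (illustrative; the kubeconfig path is the one reported earlier in the log, and the poll interval is an assumption):

	    // node_ready.go: wait for a node's Ready condition using client-go (illustrative sketch).
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18706-11572/kubeconfig")
	        if err != nil {
	            log.Fatal(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        // Poll until the Ready condition is True; a real caller would bound this with a timeout.
	        for {
	            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-578761", metav1.GetOptions{})
	            if err == nil {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	                        fmt.Println("node is Ready")
	                        return
	                    }
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	    }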
	I0422 18:02:20.950748   52815 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:02:20.956620   52815 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dm7g2" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:20.961562   52815 pod_ready.go:92] pod "coredns-6d4b75cb6d-dm7g2" in "kube-system" namespace has status "Ready":"True"
	I0422 18:02:20.961588   52815 pod_ready.go:81] duration metric: took 4.940857ms for pod "coredns-6d4b75cb6d-dm7g2" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:20.961600   52815 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:22.983742   52815 pod_ready.go:102] pod "etcd-test-preload-578761" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:24.968478   52815 pod_ready.go:92] pod "etcd-test-preload-578761" in "kube-system" namespace has status "Ready":"True"
	I0422 18:02:24.968505   52815 pod_ready.go:81] duration metric: took 4.006898013s for pod "etcd-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.968518   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.974768   52815 pod_ready.go:92] pod "kube-apiserver-test-preload-578761" in "kube-system" namespace has status "Ready":"True"
	I0422 18:02:24.974799   52815 pod_ready.go:81] duration metric: took 6.271907ms for pod "kube-apiserver-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.974813   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.980106   52815 pod_ready.go:92] pod "kube-controller-manager-test-preload-578761" in "kube-system" namespace has status "Ready":"True"
	I0422 18:02:24.980127   52815 pod_ready.go:81] duration metric: took 5.305922ms for pod "kube-controller-manager-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.980136   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2k45z" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.984056   52815 pod_ready.go:92] pod "kube-proxy-2k45z" in "kube-system" namespace has status "Ready":"True"
	I0422 18:02:24.984075   52815 pod_ready.go:81] duration metric: took 3.934055ms for pod "kube-proxy-2k45z" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.984084   52815 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.988802   52815 pod_ready.go:92] pod "kube-scheduler-test-preload-578761" in "kube-system" namespace has status "Ready":"True"
	I0422 18:02:24.988821   52815 pod_ready.go:81] duration metric: took 4.731818ms for pod "kube-scheduler-test-preload-578761" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:24.988830   52815 pod_ready.go:38] duration metric: took 4.03807173s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:02:24.988843   52815 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:02:24.988888   52815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:02:25.005220   52815 api_server.go:72] duration metric: took 11.260401311s to wait for apiserver process to appear ...
	I0422 18:02:25.005249   52815 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:02:25.005282   52815 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I0422 18:02:25.010522   52815 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I0422 18:02:25.011477   52815 api_server.go:141] control plane version: v1.24.4
	I0422 18:02:25.011500   52815 api_server.go:131] duration metric: took 6.243775ms to wait for apiserver health ...
	I0422 18:02:25.011511   52815 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:02:25.169317   52815 system_pods.go:59] 7 kube-system pods found
	I0422 18:02:25.169349   52815 system_pods.go:61] "coredns-6d4b75cb6d-dm7g2" [f56d5bd5-f458-43dd-8622-3e848bf41a32] Running
	I0422 18:02:25.169355   52815 system_pods.go:61] "etcd-test-preload-578761" [396e0344-e0d8-4e6b-a330-0cb042e1431f] Running
	I0422 18:02:25.169360   52815 system_pods.go:61] "kube-apiserver-test-preload-578761" [dcd58e3e-8600-4aa4-ae03-7c315cf84d7b] Running
	I0422 18:02:25.169366   52815 system_pods.go:61] "kube-controller-manager-test-preload-578761" [b5ae6cf2-5be3-47b4-b6be-5e2c27756035] Running
	I0422 18:02:25.169370   52815 system_pods.go:61] "kube-proxy-2k45z" [0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0] Running
	I0422 18:02:25.169375   52815 system_pods.go:61] "kube-scheduler-test-preload-578761" [931d2a13-568a-4cf7-8a2a-eec3b09caef4] Running
	I0422 18:02:25.169383   52815 system_pods.go:61] "storage-provisioner" [88f90f35-0a89-4f56-945e-ccab1ab38fc9] Running
	I0422 18:02:25.169392   52815 system_pods.go:74] duration metric: took 157.87473ms to wait for pod list to return data ...
	I0422 18:02:25.169402   52815 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:02:25.365591   52815 default_sa.go:45] found service account: "default"
	I0422 18:02:25.365614   52815 default_sa.go:55] duration metric: took 196.206113ms for default service account to be created ...
	I0422 18:02:25.365622   52815 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:02:25.569014   52815 system_pods.go:86] 7 kube-system pods found
	I0422 18:02:25.569044   52815 system_pods.go:89] "coredns-6d4b75cb6d-dm7g2" [f56d5bd5-f458-43dd-8622-3e848bf41a32] Running
	I0422 18:02:25.569052   52815 system_pods.go:89] "etcd-test-preload-578761" [396e0344-e0d8-4e6b-a330-0cb042e1431f] Running
	I0422 18:02:25.569058   52815 system_pods.go:89] "kube-apiserver-test-preload-578761" [dcd58e3e-8600-4aa4-ae03-7c315cf84d7b] Running
	I0422 18:02:25.569064   52815 system_pods.go:89] "kube-controller-manager-test-preload-578761" [b5ae6cf2-5be3-47b4-b6be-5e2c27756035] Running
	I0422 18:02:25.569068   52815 system_pods.go:89] "kube-proxy-2k45z" [0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0] Running
	I0422 18:02:25.569073   52815 system_pods.go:89] "kube-scheduler-test-preload-578761" [931d2a13-568a-4cf7-8a2a-eec3b09caef4] Running
	I0422 18:02:25.569078   52815 system_pods.go:89] "storage-provisioner" [88f90f35-0a89-4f56-945e-ccab1ab38fc9] Running
	I0422 18:02:25.569088   52815 system_pods.go:126] duration metric: took 203.459378ms to wait for k8s-apps to be running ...
	I0422 18:02:25.569096   52815 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:02:25.569142   52815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:02:25.583918   52815 system_svc.go:56] duration metric: took 14.814666ms WaitForService to wait for kubelet
	I0422 18:02:25.583947   52815 kubeadm.go:576] duration metric: took 11.839134496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:02:25.583970   52815 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:02:25.766470   52815 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:02:25.766496   52815 node_conditions.go:123] node cpu capacity is 2
	I0422 18:02:25.766506   52815 node_conditions.go:105] duration metric: took 182.531835ms to run NodePressure ...
	I0422 18:02:25.766517   52815 start.go:240] waiting for startup goroutines ...
	I0422 18:02:25.766523   52815 start.go:245] waiting for cluster config update ...
	I0422 18:02:25.766537   52815 start.go:254] writing updated cluster config ...
	I0422 18:02:25.766769   52815 ssh_runner.go:195] Run: rm -f paused
	I0422 18:02:25.811298   52815 start.go:600] kubectl: 1.30.0, cluster: 1.24.4 (minor skew: 6)
	I0422 18:02:25.813626   52815 out.go:177] 
	W0422 18:02:25.815099   52815 out.go:239] ! /usr/local/bin/kubectl is version 1.30.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0422 18:02:25.816514   52815 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0422 18:02:25.817987   52815 out.go:177] * Done! kubectl is now configured to use "test-preload-578761" cluster and "default" namespace by default
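	The closing warning flags a six-minor-version skew between the host kubectl (1.30.0) and the cluster (1.24.4); upstream kubectl is only supported within one minor version of the apiserver, hence the suggestion to use the bundled 'minikube kubectl --'. For completeness, a small client-go discovery sketch that reads the server version the same way (kubeconfig path taken from the log above):

	    // server_version.go: read the cluster's version via the discovery client (illustrative sketch).
	    package main

	    import (
	        "fmt"
	        "log"

	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18706-11572/kubeconfig")
	        if err != nil {
	            log.Fatal(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        v, err := cs.Discovery().ServerVersion()
	        if err != nil {
	            log.Fatal(err)
	        }
	        // For this cluster the result would be v1.24.4; compare Minor against the
	        // local kubectl's minor version to detect skew.
	        fmt.Printf("server version: %s (major %s, minor %s)\n", v.GitVersion, v.Major, v.Minor)
	    }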
	
	
	==> CRI-O <==
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.748815929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808946748794931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98adf890-4b02-45bc-ba5c-62382fef85b3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.749579933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03fc1602-46bb-46d2-a16f-ad0605302a0e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.749630377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03fc1602-46bb-46d2-a16f-ad0605302a0e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.749775272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62c02957348d8d7151ab6b022ea1edcb6df39e5448a4cb9515318b6547f60180,PodSandboxId:d2c28e911d46081eedf531a4bc9337bd07d3a7ad4d0b42d7c8c9766315e9d15a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713808939417996884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dm7g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d5bd5-f458-43dd-8622-3e848bf41a32,},Annotations:map[string]string{io.kubernetes.container.hash: d80dff1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a55cb1248aca5000a5784c73a1fc075aa9d96291051a87983770f25d3f70bc,PodSandboxId:8ddcc0e628ec4273bf06cdd7e6cf4918ee1f25e6b7c42d86a06a375d60d2bb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713808932056386662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2k45z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: fcd53ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e300bdd2679718bdf2242ebc51e4bb73cb3326a35295b18c47a78358ff3b94a7,PodSandboxId:71787f0871ad0e29b4e1b7732f1686a596438d9d10b25fda6243c1705a9ae926,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808932051662609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88
f90f35-0a89-4f56-945e-ccab1ab38fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 9def666a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e3daa4dce5e70616a7b084af1aed926cda6245a23204fbe57cca5a04dec566,PodSandboxId:834c7b19c729359a5c5d271685d9b39ce037e691884cf40b4a1d7230d32d8d60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713808926800087507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48d42bea0879668a81d5c346bf373a2f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039885fcac57da45734ac0a29bc0bb40739e0db135df7f67bb8395701afd4003,PodSandboxId:ee695556acd57608c9af532ae6cfd7957c60a3ff0ff581c1a7c8f6f02e3255d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713808926741676305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c475a985350ba676865613a
ebc727dd6,},Annotations:map[string]string{io.kubernetes.container.hash: e8f3470a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87e0a71c957a8646866970d44aad5416cede29bc171d07a0fd2fa3d49670ad1,PodSandboxId:25c285b4472c6b7188c0357a4ff7dc6c37e5d5d89cd46c6bb0dae136381b5dd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713808926714101522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d6cffcc69149ab822b0a217fa56f64b,}
,Annotations:map[string]string{io.kubernetes.container.hash: f13637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5effb8344d08dbf5f75463b38ef09711775ccd3c7e7ca5419098de003043b216,PodSandboxId:3550b970556b83f74582667538b7a8aa03b6427e402e644c837a17fd8692c0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713808926708506856,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e62c5c19124180d52b5055efb1ce37,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03fc1602-46bb-46d2-a16f-ad0605302a0e name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.790934075Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a00073a-dc86-4184-9e77-51cbf6c25b38 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.791010963Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a00073a-dc86-4184-9e77-51cbf6c25b38 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.792513660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f11a3ea-5aae-4986-a113-39bc01481335 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.792924042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808946792904454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f11a3ea-5aae-4986-a113-39bc01481335 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.793482433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e085611-0fe7-43bb-affc-311158aea486 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.793531631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e085611-0fe7-43bb-affc-311158aea486 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.793677196Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62c02957348d8d7151ab6b022ea1edcb6df39e5448a4cb9515318b6547f60180,PodSandboxId:d2c28e911d46081eedf531a4bc9337bd07d3a7ad4d0b42d7c8c9766315e9d15a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713808939417996884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dm7g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d5bd5-f458-43dd-8622-3e848bf41a32,},Annotations:map[string]string{io.kubernetes.container.hash: d80dff1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a55cb1248aca5000a5784c73a1fc075aa9d96291051a87983770f25d3f70bc,PodSandboxId:8ddcc0e628ec4273bf06cdd7e6cf4918ee1f25e6b7c42d86a06a375d60d2bb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713808932056386662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2k45z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: fcd53ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e300bdd2679718bdf2242ebc51e4bb73cb3326a35295b18c47a78358ff3b94a7,PodSandboxId:71787f0871ad0e29b4e1b7732f1686a596438d9d10b25fda6243c1705a9ae926,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808932051662609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88
f90f35-0a89-4f56-945e-ccab1ab38fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 9def666a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e3daa4dce5e70616a7b084af1aed926cda6245a23204fbe57cca5a04dec566,PodSandboxId:834c7b19c729359a5c5d271685d9b39ce037e691884cf40b4a1d7230d32d8d60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713808926800087507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48d42bea0879668a81d5c346bf373a2f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039885fcac57da45734ac0a29bc0bb40739e0db135df7f67bb8395701afd4003,PodSandboxId:ee695556acd57608c9af532ae6cfd7957c60a3ff0ff581c1a7c8f6f02e3255d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713808926741676305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c475a985350ba676865613a
ebc727dd6,},Annotations:map[string]string{io.kubernetes.container.hash: e8f3470a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87e0a71c957a8646866970d44aad5416cede29bc171d07a0fd2fa3d49670ad1,PodSandboxId:25c285b4472c6b7188c0357a4ff7dc6c37e5d5d89cd46c6bb0dae136381b5dd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713808926714101522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d6cffcc69149ab822b0a217fa56f64b,}
,Annotations:map[string]string{io.kubernetes.container.hash: f13637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5effb8344d08dbf5f75463b38ef09711775ccd3c7e7ca5419098de003043b216,PodSandboxId:3550b970556b83f74582667538b7a8aa03b6427e402e644c837a17fd8692c0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713808926708506856,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e62c5c19124180d52b5055efb1ce37,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e085611-0fe7-43bb-affc-311158aea486 name=/runtime.v1.RuntimeService/ListContainers
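	The CRI-O excerpt above and below shows clients polling the runtime over the CRI v1 gRPC API: Version, ImageFsInfo and ListContainers requests, each answered with the full container list because no filter is set. A hedged Go sketch of issuing the same ListContainers call against the CRI-O socket (socket path and timeout are assumptions; package paths are the upstream cri-api v1 client):

	    // cri_list.go: list containers over the CRI v1 API, as seen in the CRI-O debug log.
	    package main

	    import (
	        "context"
	        "fmt"
	        "log"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        client := runtimeapi.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // An empty filter returns the full container list, matching the
	        // "No filters were applied" lines in the log.
	        resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	        if err != nil {
	            log.Fatal(err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Printf("%s\t%s\t%v\n", c.Id, c.Metadata.Name, c.State)
	        }
	    }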
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.834108296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40f83167-742b-465b-b58d-35a11f8da3cf name=/runtime.v1.RuntimeService/Version
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.834188782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40f83167-742b-465b-b58d-35a11f8da3cf name=/runtime.v1.RuntimeService/Version
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.836078600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87358997-fec2-4291-858e-9d729c08906e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.836705993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808946836681468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87358997-fec2-4291-858e-9d729c08906e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.837497924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37c76cba-25fe-42dd-a11c-52c295675ef7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.837547337Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37c76cba-25fe-42dd-a11c-52c295675ef7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.837723601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62c02957348d8d7151ab6b022ea1edcb6df39e5448a4cb9515318b6547f60180,PodSandboxId:d2c28e911d46081eedf531a4bc9337bd07d3a7ad4d0b42d7c8c9766315e9d15a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713808939417996884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dm7g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d5bd5-f458-43dd-8622-3e848bf41a32,},Annotations:map[string]string{io.kubernetes.container.hash: d80dff1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a55cb1248aca5000a5784c73a1fc075aa9d96291051a87983770f25d3f70bc,PodSandboxId:8ddcc0e628ec4273bf06cdd7e6cf4918ee1f25e6b7c42d86a06a375d60d2bb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713808932056386662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2k45z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: fcd53ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e300bdd2679718bdf2242ebc51e4bb73cb3326a35295b18c47a78358ff3b94a7,PodSandboxId:71787f0871ad0e29b4e1b7732f1686a596438d9d10b25fda6243c1705a9ae926,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808932051662609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88
f90f35-0a89-4f56-945e-ccab1ab38fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 9def666a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e3daa4dce5e70616a7b084af1aed926cda6245a23204fbe57cca5a04dec566,PodSandboxId:834c7b19c729359a5c5d271685d9b39ce037e691884cf40b4a1d7230d32d8d60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713808926800087507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48d42bea0879668a81d5c346bf373a2f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039885fcac57da45734ac0a29bc0bb40739e0db135df7f67bb8395701afd4003,PodSandboxId:ee695556acd57608c9af532ae6cfd7957c60a3ff0ff581c1a7c8f6f02e3255d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713808926741676305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c475a985350ba676865613a
ebc727dd6,},Annotations:map[string]string{io.kubernetes.container.hash: e8f3470a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87e0a71c957a8646866970d44aad5416cede29bc171d07a0fd2fa3d49670ad1,PodSandboxId:25c285b4472c6b7188c0357a4ff7dc6c37e5d5d89cd46c6bb0dae136381b5dd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713808926714101522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d6cffcc69149ab822b0a217fa56f64b,}
,Annotations:map[string]string{io.kubernetes.container.hash: f13637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5effb8344d08dbf5f75463b38ef09711775ccd3c7e7ca5419098de003043b216,PodSandboxId:3550b970556b83f74582667538b7a8aa03b6427e402e644c837a17fd8692c0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713808926708506856,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e62c5c19124180d52b5055efb1ce37,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37c76cba-25fe-42dd-a11c-52c295675ef7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.874138034Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9625f8e6-c04d-4439-9b22-9cdc0431f30f name=/runtime.v1.RuntimeService/Version
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.874261378Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9625f8e6-c04d-4439-9b22-9cdc0431f30f name=/runtime.v1.RuntimeService/Version
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.875328468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe165f90-53e3-4911-bd30-4273c0da3270 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.875745235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713808946875725296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe165f90-53e3-4911-bd30-4273c0da3270 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.876562075Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65c5833e-6bae-4b28-a945-bcebc068b7c4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.876615685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65c5833e-6bae-4b28-a945-bcebc068b7c4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:02:26 test-preload-578761 crio[702]: time="2024-04-22 18:02:26.876767705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:62c02957348d8d7151ab6b022ea1edcb6df39e5448a4cb9515318b6547f60180,PodSandboxId:d2c28e911d46081eedf531a4bc9337bd07d3a7ad4d0b42d7c8c9766315e9d15a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713808939417996884,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-dm7g2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f56d5bd5-f458-43dd-8622-3e848bf41a32,},Annotations:map[string]string{io.kubernetes.container.hash: d80dff1f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81a55cb1248aca5000a5784c73a1fc075aa9d96291051a87983770f25d3f70bc,PodSandboxId:8ddcc0e628ec4273bf06cdd7e6cf4918ee1f25e6b7c42d86a06a375d60d2bb8d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713808932056386662,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2k45z,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0,},Annotations:map[string]string{io.kubernetes.container.hash: fcd53ef3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e300bdd2679718bdf2242ebc51e4bb73cb3326a35295b18c47a78358ff3b94a7,PodSandboxId:71787f0871ad0e29b4e1b7732f1686a596438d9d10b25fda6243c1705a9ae926,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713808932051662609,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88
f90f35-0a89-4f56-945e-ccab1ab38fc9,},Annotations:map[string]string{io.kubernetes.container.hash: 9def666a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87e3daa4dce5e70616a7b084af1aed926cda6245a23204fbe57cca5a04dec566,PodSandboxId:834c7b19c729359a5c5d271685d9b39ce037e691884cf40b4a1d7230d32d8d60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713808926800087507,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 48d42bea0879668a81d5c346bf373a2f,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039885fcac57da45734ac0a29bc0bb40739e0db135df7f67bb8395701afd4003,PodSandboxId:ee695556acd57608c9af532ae6cfd7957c60a3ff0ff581c1a7c8f6f02e3255d6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713808926741676305,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c475a985350ba676865613a
ebc727dd6,},Annotations:map[string]string{io.kubernetes.container.hash: e8f3470a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e87e0a71c957a8646866970d44aad5416cede29bc171d07a0fd2fa3d49670ad1,PodSandboxId:25c285b4472c6b7188c0357a4ff7dc6c37e5d5d89cd46c6bb0dae136381b5dd7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713808926714101522,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d6cffcc69149ab822b0a217fa56f64b,}
,Annotations:map[string]string{io.kubernetes.container.hash: f13637,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5effb8344d08dbf5f75463b38ef09711775ccd3c7e7ca5419098de003043b216,PodSandboxId:3550b970556b83f74582667538b7a8aa03b6427e402e644c837a17fd8692c0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713808926708506856,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-578761,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05e62c5c19124180d52b5055efb1ce37,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65c5833e-6bae-4b28-a945-bcebc068b7c4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	62c02957348d8       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   d2c28e911d460       coredns-6d4b75cb6d-dm7g2
	81a55cb1248ac       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   8ddcc0e628ec4       kube-proxy-2k45z
	e300bdd267971       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   71787f0871ad0       storage-provisioner
	87e3daa4dce5e       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   834c7b19c7293       kube-controller-manager-test-preload-578761
	039885fcac57d       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   ee695556acd57       etcd-test-preload-578761
	e87e0a71c957a       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   25c285b4472c6       kube-apiserver-test-preload-578761
	5effb8344d08d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   3550b970556b8       kube-scheduler-test-preload-578761
	
	
	==> coredns [62c02957348d8d7151ab6b022ea1edcb6df39e5448a4cb9515318b6547f60180] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47665 - 37196 "HINFO IN 8693898555867630847.3397448121222605068. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01053376s
	
	
	==> describe nodes <==
	Name:               test-preload-578761
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-578761
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=test-preload-578761
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_00_51_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:00:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-578761
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:02:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:02:20 +0000   Mon, 22 Apr 2024 18:00:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:02:20 +0000   Mon, 22 Apr 2024 18:00:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:02:20 +0000   Mon, 22 Apr 2024 18:00:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:02:20 +0000   Mon, 22 Apr 2024 18:02:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    test-preload-578761
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac7ba1ea058147b3bd9a563292c02d42
	  System UUID:                ac7ba1ea-0581-47b3-bd9a-563292c02d42
	  Boot ID:                    6bbb150f-6228-4b9b-8c3a-8fa83720ed96
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dm7g2                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     83s
	  kube-system                 etcd-test-preload-578761                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-578761             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-test-preload-578761    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-2k45z                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-test-preload-578761             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 81s                  kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x4 over 104s)  kubelet          Node test-preload-578761 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     104s (x3 over 104s)  kubelet          Node test-preload-578761 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    104s (x4 over 104s)  kubelet          Node test-preload-578761 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node test-preload-578761 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node test-preload-578761 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node test-preload-578761 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                  kubelet          Node test-preload-578761 status is now: NodeReady
	  Normal  RegisteredNode           84s                  node-controller  Node test-preload-578761 event: Registered Node test-preload-578761 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-578761 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-578761 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-578761 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-578761 event: Registered Node test-preload-578761 in Controller
	
	
	==> dmesg <==
	[Apr22 18:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052335] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041067] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.591299] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.917204] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.616031] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.092162] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.059342] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058480] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.208352] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.131628] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.290608] systemd-fstab-generator[688]: Ignoring "noauto" option for root device
	[Apr22 18:02] systemd-fstab-generator[958]: Ignoring "noauto" option for root device
	[  +0.061842] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.846735] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +6.193598] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.848070] systemd-fstab-generator[1718]: Ignoring "noauto" option for root device
	[  +5.394356] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [039885fcac57da45734ac0a29bc0bb40739e0db135df7f67bb8395701afd4003] <==
	{"level":"info","ts":"2024-04-22T18:02:07.123Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"f70d523d4475ce3b","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-22T18:02:07.124Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-22T18:02:07.127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b switched to configuration voters=(17801975325160492603)"}
	{"level":"info","ts":"2024-04-22T18:02:07.127Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","added-peer-id":"f70d523d4475ce3b","added-peer-peer-urls":["https://192.168.39.176:2380"]}
	{"level":"info","ts":"2024-04-22T18:02:07.127Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"40fea5b1ef9207e7","local-member-id":"f70d523d4475ce3b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:02:07.127Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:02:07.131Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T18:02:07.131Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f70d523d4475ce3b","initial-advertise-peer-urls":["https://192.168.39.176:2380"],"listen-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.176:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:02:07.131Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:02:07.131Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-22T18:02:07.131Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2024-04-22T18:02:08.169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T18:02:08.169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:02:08.169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgPreVoteResp from f70d523d4475ce3b at term 2"}
	{"level":"info","ts":"2024-04-22T18:02:08.169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T18:02:08.169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b received MsgVoteResp from f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-04-22T18:02:08.169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70d523d4475ce3b became leader at term 3"}
	{"level":"info","ts":"2024-04-22T18:02:08.169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f70d523d4475ce3b elected leader f70d523d4475ce3b at term 3"}
	{"level":"info","ts":"2024-04-22T18:02:08.171Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f70d523d4475ce3b","local-member-attributes":"{Name:test-preload-578761 ClientURLs:[https://192.168.39.176:2379]}","request-path":"/0/members/f70d523d4475ce3b/attributes","cluster-id":"40fea5b1ef9207e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:02:08.171Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:02:08.172Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:02:08.173Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.176:2379"}
	{"level":"info","ts":"2024-04-22T18:02:08.173Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:02:08.173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:02:08.173Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:02:27 up 0 min,  0 users,  load average: 0.64, 0.18, 0.06
	Linux test-preload-578761 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e87e0a71c957a8646866970d44aad5416cede29bc171d07a0fd2fa3d49670ad1] <==
	I0422 18:02:10.632972       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0422 18:02:10.632990       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0422 18:02:10.637494       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0422 18:02:10.637527       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0422 18:02:10.637579       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 18:02:10.645748       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 18:02:10.715688       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0422 18:02:10.730513       1 cache.go:39] Caches are synced for autoregister controller
	E0422 18:02:10.730845       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0422 18:02:10.733186       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0422 18:02:10.737720       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0422 18:02:10.772131       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 18:02:10.796278       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0422 18:02:10.799716       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 18:02:10.802924       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 18:02:11.291265       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0422 18:02:11.622140       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 18:02:12.305456       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0422 18:02:12.315749       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0422 18:02:12.355409       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0422 18:02:12.375545       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 18:02:12.384037       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0422 18:02:12.465160       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0422 18:02:23.036889       1 controller.go:611] quota admission added evaluator for: endpoints
	I0422 18:02:23.051719       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [87e3daa4dce5e70616a7b084af1aed926cda6245a23204fbe57cca5a04dec566] <==
	I0422 18:02:23.056456       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0422 18:02:23.064179       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0422 18:02:23.064321       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0422 18:02:23.067662       1 shared_informer.go:262] Caches are synced for service account
	I0422 18:02:23.090292       1 shared_informer.go:262] Caches are synced for namespace
	I0422 18:02:23.090391       1 shared_informer.go:262] Caches are synced for node
	I0422 18:02:23.090410       1 range_allocator.go:173] Starting range CIDR allocator
	I0422 18:02:23.090414       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0422 18:02:23.090431       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0422 18:02:23.150195       1 shared_informer.go:262] Caches are synced for disruption
	I0422 18:02:23.150281       1 disruption.go:371] Sending events to api server.
	I0422 18:02:23.161990       1 shared_informer.go:262] Caches are synced for ephemeral
	I0422 18:02:23.168612       1 shared_informer.go:262] Caches are synced for expand
	I0422 18:02:23.194483       1 shared_informer.go:262] Caches are synced for persistent volume
	I0422 18:02:23.202847       1 shared_informer.go:262] Caches are synced for stateful set
	I0422 18:02:23.206297       1 shared_informer.go:262] Caches are synced for PVC protection
	I0422 18:02:23.213188       1 shared_informer.go:262] Caches are synced for attach detach
	I0422 18:02:23.238838       1 shared_informer.go:262] Caches are synced for job
	I0422 18:02:23.242300       1 shared_informer.go:262] Caches are synced for resource quota
	I0422 18:02:23.266161       1 shared_informer.go:262] Caches are synced for resource quota
	I0422 18:02:23.270796       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0422 18:02:23.307069       1 shared_informer.go:262] Caches are synced for cronjob
	I0422 18:02:23.723107       1 shared_informer.go:262] Caches are synced for garbage collector
	I0422 18:02:23.734639       1 shared_informer.go:262] Caches are synced for garbage collector
	I0422 18:02:23.734681       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [81a55cb1248aca5000a5784c73a1fc075aa9d96291051a87983770f25d3f70bc] <==
	I0422 18:02:12.417273       1 node.go:163] Successfully retrieved node IP: 192.168.39.176
	I0422 18:02:12.417684       1 server_others.go:138] "Detected node IP" address="192.168.39.176"
	I0422 18:02:12.417816       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0422 18:02:12.456289       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0422 18:02:12.456373       1 server_others.go:206] "Using iptables Proxier"
	I0422 18:02:12.456405       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0422 18:02:12.457385       1 server.go:661] "Version info" version="v1.24.4"
	I0422 18:02:12.457434       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:02:12.458300       1 config.go:317] "Starting service config controller"
	I0422 18:02:12.458726       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0422 18:02:12.458775       1 config.go:444] "Starting node config controller"
	I0422 18:02:12.458792       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0422 18:02:12.459651       1 config.go:226] "Starting endpoint slice config controller"
	I0422 18:02:12.459681       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0422 18:02:12.559001       1 shared_informer.go:262] Caches are synced for node config
	I0422 18:02:12.559059       1 shared_informer.go:262] Caches are synced for service config
	I0422 18:02:12.560306       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5effb8344d08dbf5f75463b38ef09711775ccd3c7e7ca5419098de003043b216] <==
	I0422 18:02:07.499018       1 serving.go:348] Generated self-signed cert in-memory
	W0422 18:02:10.657742       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 18:02:10.658175       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:02:10.658491       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 18:02:10.658663       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 18:02:10.716635       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0422 18:02:10.716746       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:02:10.728302       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0422 18:02:10.728481       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 18:02:10.728517       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:02:10.728537       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:02:10.829388       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.058545    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume\") pod \"coredns-6d4b75cb6d-dm7g2\" (UID: \"f56d5bd5-f458-43dd-8622-3e848bf41a32\") " pod="kube-system/coredns-6d4b75cb6d-dm7g2"
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.058570    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44z5w\" (UniqueName: \"kubernetes.io/projected/f56d5bd5-f458-43dd-8622-3e848bf41a32-kube-api-access-44z5w\") pod \"coredns-6d4b75cb6d-dm7g2\" (UID: \"f56d5bd5-f458-43dd-8622-3e848bf41a32\") " pod="kube-system/coredns-6d4b75cb6d-dm7g2"
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.058592    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0-xtables-lock\") pod \"kube-proxy-2k45z\" (UID: \"0ac0cc9c-dd7b-46bd-bcc7-350191b3d1d0\") " pod="kube-system/kube-proxy-2k45z"
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.058618    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf24z\" (UniqueName: \"kubernetes.io/projected/88f90f35-0a89-4f56-945e-ccab1ab38fc9-kube-api-access-gf24z\") pod \"storage-provisioner\" (UID: \"88f90f35-0a89-4f56-945e-ccab1ab38fc9\") " pod="kube-system/storage-provisioner"
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.058662    1093 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/88f90f35-0a89-4f56-945e-ccab1ab38fc9-tmp\") pod \"storage-provisioner\" (UID: \"88f90f35-0a89-4f56-945e-ccab1ab38fc9\") " pod="kube-system/storage-provisioner"
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.058676    1093 reconciler.go:159] "Reconciler: start to sync state"
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.372513    1093 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlxsj\" (UniqueName: \"kubernetes.io/projected/40c1b9da-3105-4bbd-afec-295a0369900a-kube-api-access-hlxsj\") pod \"40c1b9da-3105-4bbd-afec-295a0369900a\" (UID: \"40c1b9da-3105-4bbd-afec-295a0369900a\") "
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.372587    1093 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c1b9da-3105-4bbd-afec-295a0369900a-config-volume\") pod \"40c1b9da-3105-4bbd-afec-295a0369900a\" (UID: \"40c1b9da-3105-4bbd-afec-295a0369900a\") "
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: W0422 18:02:11.373940    1093 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/40c1b9da-3105-4bbd-afec-295a0369900a/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.374420    1093 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c1b9da-3105-4bbd-afec-295a0369900a-config-volume" (OuterVolumeSpecName: "config-volume") pod "40c1b9da-3105-4bbd-afec-295a0369900a" (UID: "40c1b9da-3105-4bbd-afec-295a0369900a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: E0422 18:02:11.374509    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: E0422 18:02:11.374559    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume podName:f56d5bd5-f458-43dd-8622-3e848bf41a32 nodeName:}" failed. No retries permitted until 2024-04-22 18:02:11.874540265 +0000 UTC m=+6.012226010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume") pod "coredns-6d4b75cb6d-dm7g2" (UID: "f56d5bd5-f458-43dd-8622-3e848bf41a32") : object "kube-system"/"coredns" not registered
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: W0422 18:02:11.375687    1093 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/40c1b9da-3105-4bbd-afec-295a0369900a/volumes/kubernetes.io~projected/kube-api-access-hlxsj: clearQuota called, but quotas disabled
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.376186    1093 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c1b9da-3105-4bbd-afec-295a0369900a-kube-api-access-hlxsj" (OuterVolumeSpecName: "kube-api-access-hlxsj") pod "40c1b9da-3105-4bbd-afec-295a0369900a" (UID: "40c1b9da-3105-4bbd-afec-295a0369900a"). InnerVolumeSpecName "kube-api-access-hlxsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.473396    1093 reconciler.go:384] "Volume detached for volume \"kube-api-access-hlxsj\" (UniqueName: \"kubernetes.io/projected/40c1b9da-3105-4bbd-afec-295a0369900a-kube-api-access-hlxsj\") on node \"test-preload-578761\" DevicePath \"\""
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: I0422 18:02:11.473566    1093 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c1b9da-3105-4bbd-afec-295a0369900a-config-volume\") on node \"test-preload-578761\" DevicePath \"\""
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: E0422 18:02:11.876107    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 22 18:02:11 test-preload-578761 kubelet[1093]: E0422 18:02:11.876166    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume podName:f56d5bd5-f458-43dd-8622-3e848bf41a32 nodeName:}" failed. No retries permitted until 2024-04-22 18:02:12.876149961 +0000 UTC m=+7.013835721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume") pod "coredns-6d4b75cb6d-dm7g2" (UID: "f56d5bd5-f458-43dd-8622-3e848bf41a32") : object "kube-system"/"coredns" not registered
	Apr 22 18:02:12 test-preload-578761 kubelet[1093]: E0422 18:02:12.883449    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 22 18:02:12 test-preload-578761 kubelet[1093]: E0422 18:02:12.883959    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume podName:f56d5bd5-f458-43dd-8622-3e848bf41a32 nodeName:}" failed. No retries permitted until 2024-04-22 18:02:14.883935375 +0000 UTC m=+9.021621127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume") pod "coredns-6d4b75cb6d-dm7g2" (UID: "f56d5bd5-f458-43dd-8622-3e848bf41a32") : object "kube-system"/"coredns" not registered
	Apr 22 18:02:13 test-preload-578761 kubelet[1093]: E0422 18:02:13.097659    1093 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dm7g2" podUID=f56d5bd5-f458-43dd-8622-3e848bf41a32
	Apr 22 18:02:14 test-preload-578761 kubelet[1093]: I0422 18:02:14.107548    1093 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=40c1b9da-3105-4bbd-afec-295a0369900a path="/var/lib/kubelet/pods/40c1b9da-3105-4bbd-afec-295a0369900a/volumes"
	Apr 22 18:02:14 test-preload-578761 kubelet[1093]: E0422 18:02:14.901863    1093 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 22 18:02:14 test-preload-578761 kubelet[1093]: E0422 18:02:14.901976    1093 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume podName:f56d5bd5-f458-43dd-8622-3e848bf41a32 nodeName:}" failed. No retries permitted until 2024-04-22 18:02:18.901957695 +0000 UTC m=+13.039643456 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f56d5bd5-f458-43dd-8622-3e848bf41a32-config-volume") pod "coredns-6d4b75cb6d-dm7g2" (UID: "f56d5bd5-f458-43dd-8622-3e848bf41a32") : object "kube-system"/"coredns" not registered
	Apr 22 18:02:15 test-preload-578761 kubelet[1093]: E0422 18:02:15.097719    1093 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-dm7g2" podUID=f56d5bd5-f458-43dd-8622-3e848bf41a32
	
	
	==> storage-provisioner [e300bdd2679718bdf2242ebc51e4bb73cb3326a35295b18c47a78358ff3b94a7] <==
	I0422 18:02:12.237970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-578761 -n test-preload-578761
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-578761 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-578761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-578761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-578761: (1.126659653s)
--- FAIL: TestPreload (172.41s)

                                                
                                    
TestKubernetesUpgrade (421.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m59.077149826s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-432126] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-432126" primary control-plane node in "kubernetes-upgrade-432126" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 18:04:22.677866   54374 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:04:22.678136   54374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:04:22.678147   54374 out.go:304] Setting ErrFile to fd 2...
	I0422 18:04:22.678153   54374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:04:22.678410   54374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:04:22.679070   54374 out.go:298] Setting JSON to false
	I0422 18:04:22.680108   54374 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6408,"bootTime":1713802655,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:04:22.680179   54374 start.go:139] virtualization: kvm guest
	I0422 18:04:22.682405   54374 out.go:177] * [kubernetes-upgrade-432126] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:04:22.686494   54374 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:04:22.685095   54374 notify.go:220] Checking for updates...
	I0422 18:04:22.689456   54374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:04:22.692146   54374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:04:22.694740   54374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:04:22.697961   54374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:04:22.700603   54374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:04:22.702022   54374 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:04:22.740534   54374 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:04:22.741758   54374 start.go:297] selected driver: kvm2
	I0422 18:04:22.741770   54374 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:04:22.741779   54374 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:04:22.742758   54374 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:04:22.757298   54374 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:04:22.774042   54374 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:04:22.774125   54374 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 18:04:22.774396   54374 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 18:04:22.774473   54374 cni.go:84] Creating CNI manager for ""
	I0422 18:04:22.774493   54374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:04:22.774505   54374 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 18:04:22.774605   54374 start.go:340] cluster config:
	{Name:kubernetes-upgrade-432126 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:04:22.774738   54374 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:04:22.776570   54374 out.go:177] * Starting "kubernetes-upgrade-432126" primary control-plane node in "kubernetes-upgrade-432126" cluster
	I0422 18:04:22.778151   54374 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:04:22.778196   54374 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:04:22.778206   54374 cache.go:56] Caching tarball of preloaded images
	I0422 18:04:22.778313   54374 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:04:22.778327   54374 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:04:22.778751   54374 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/config.json ...
	I0422 18:04:22.778778   54374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/config.json: {Name:mk714f259be17f1286451abd0d9edc25df208c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:04:22.778939   54374 start.go:360] acquireMachinesLock for kubernetes-upgrade-432126: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:04:47.224258   54374 start.go:364] duration metric: took 24.445291635s to acquireMachinesLock for "kubernetes-upgrade-432126"
	I0422 18:04:47.224339   54374 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-432126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:04:47.224468   54374 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 18:04:47.226676   54374 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 18:04:47.226873   54374 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:04:47.226918   54374 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:04:47.243915   54374 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0422 18:04:47.244418   54374 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:04:47.245021   54374 main.go:141] libmachine: Using API Version  1
	I0422 18:04:47.245049   54374 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:04:47.245368   54374 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:04:47.245565   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetMachineName
	I0422 18:04:47.245757   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:04:47.245925   54374 start.go:159] libmachine.API.Create for "kubernetes-upgrade-432126" (driver="kvm2")
	I0422 18:04:47.245966   54374 client.go:168] LocalClient.Create starting
	I0422 18:04:47.246002   54374 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 18:04:47.246041   54374 main.go:141] libmachine: Decoding PEM data...
	I0422 18:04:47.246064   54374 main.go:141] libmachine: Parsing certificate...
	I0422 18:04:47.246130   54374 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 18:04:47.246161   54374 main.go:141] libmachine: Decoding PEM data...
	I0422 18:04:47.246178   54374 main.go:141] libmachine: Parsing certificate...
	I0422 18:04:47.246206   54374 main.go:141] libmachine: Running pre-create checks...
	I0422 18:04:47.246222   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .PreCreateCheck
	I0422 18:04:47.247543   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetConfigRaw
	I0422 18:04:47.248009   54374 main.go:141] libmachine: Creating machine...
	I0422 18:04:47.248027   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .Create
	I0422 18:04:47.248173   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Creating KVM machine...
	I0422 18:04:47.249298   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found existing default KVM network
	I0422 18:04:47.250503   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:47.250329   56844 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:11:13:fb} reservation:<nil>}
	I0422 18:04:47.251254   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:47.251145   56844 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000254330}
	I0422 18:04:47.251274   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | created network xml: 
	I0422 18:04:47.251286   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | <network>
	I0422 18:04:47.251585   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |   <name>mk-kubernetes-upgrade-432126</name>
	I0422 18:04:47.251612   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |   <dns enable='no'/>
	I0422 18:04:47.251624   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |   
	I0422 18:04:47.251634   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0422 18:04:47.251644   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |     <dhcp>
	I0422 18:04:47.251656   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0422 18:04:47.251665   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |     </dhcp>
	I0422 18:04:47.251676   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |   </ip>
	I0422 18:04:47.251684   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG |   
	I0422 18:04:47.251691   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | </network>
	I0422 18:04:47.251707   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | 
	I0422 18:04:47.257283   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | trying to create private KVM network mk-kubernetes-upgrade-432126 192.168.50.0/24...
	I0422 18:04:47.334513   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | private KVM network mk-kubernetes-upgrade-432126 192.168.50.0/24 created
	I0422 18:04:47.334551   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126 ...
	I0422 18:04:47.334569   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:47.334447   56844 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:04:47.334595   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 18:04:47.334621   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 18:04:47.564921   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:47.564768   56844 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa...
	I0422 18:04:47.866100   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:47.865948   56844 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/kubernetes-upgrade-432126.rawdisk...
	I0422 18:04:47.866130   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Writing magic tar header
	I0422 18:04:47.866148   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Writing SSH key tar header
	I0422 18:04:47.866162   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:47.866098   56844 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126 ...
	I0422 18:04:47.866214   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126
	I0422 18:04:47.866304   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 18:04:47.866348   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:04:47.866358   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126 (perms=drwx------)
	I0422 18:04:47.866375   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 18:04:47.866384   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 18:04:47.866394   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 18:04:47.866404   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 18:04:47.866414   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 18:04:47.866423   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Creating domain...
	I0422 18:04:47.866466   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 18:04:47.866492   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 18:04:47.866516   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Checking permissions on dir: /home/jenkins
	I0422 18:04:47.866539   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Checking permissions on dir: /home
	I0422 18:04:47.866562   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Skipping /home - not owner
	I0422 18:04:47.867566   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) define libvirt domain using xml: 
	I0422 18:04:47.867588   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) <domain type='kvm'>
	I0422 18:04:47.867611   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   <name>kubernetes-upgrade-432126</name>
	I0422 18:04:47.867619   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   <memory unit='MiB'>2200</memory>
	I0422 18:04:47.867633   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   <vcpu>2</vcpu>
	I0422 18:04:47.867644   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   <features>
	I0422 18:04:47.867674   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <acpi/>
	I0422 18:04:47.867697   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <apic/>
	I0422 18:04:47.867719   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <pae/>
	I0422 18:04:47.867740   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     
	I0422 18:04:47.867803   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   </features>
	I0422 18:04:47.867832   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   <cpu mode='host-passthrough'>
	I0422 18:04:47.867842   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   
	I0422 18:04:47.867856   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   </cpu>
	I0422 18:04:47.867869   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   <os>
	I0422 18:04:47.867880   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <type>hvm</type>
	I0422 18:04:47.867902   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <boot dev='cdrom'/>
	I0422 18:04:47.867914   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <boot dev='hd'/>
	I0422 18:04:47.867938   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <bootmenu enable='no'/>
	I0422 18:04:47.867967   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   </os>
	I0422 18:04:47.867979   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   <devices>
	I0422 18:04:47.867991   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <disk type='file' device='cdrom'>
	I0422 18:04:47.868020   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/boot2docker.iso'/>
	I0422 18:04:47.868037   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <target dev='hdc' bus='scsi'/>
	I0422 18:04:47.868050   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <readonly/>
	I0422 18:04:47.868060   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     </disk>
	I0422 18:04:47.868070   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <disk type='file' device='disk'>
	I0422 18:04:47.868083   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 18:04:47.868101   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/kubernetes-upgrade-432126.rawdisk'/>
	I0422 18:04:47.868117   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <target dev='hda' bus='virtio'/>
	I0422 18:04:47.868130   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     </disk>
	I0422 18:04:47.868141   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <interface type='network'>
	I0422 18:04:47.868153   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <source network='mk-kubernetes-upgrade-432126'/>
	I0422 18:04:47.868162   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <model type='virtio'/>
	I0422 18:04:47.868171   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     </interface>
	I0422 18:04:47.868183   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <interface type='network'>
	I0422 18:04:47.868197   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <source network='default'/>
	I0422 18:04:47.868208   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <model type='virtio'/>
	I0422 18:04:47.868220   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     </interface>
	I0422 18:04:47.868228   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <serial type='pty'>
	I0422 18:04:47.868237   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <target port='0'/>
	I0422 18:04:47.868251   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     </serial>
	I0422 18:04:47.868261   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <console type='pty'>
	I0422 18:04:47.868269   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <target type='serial' port='0'/>
	I0422 18:04:47.868296   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     </console>
	I0422 18:04:47.868308   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     <rng model='virtio'>
	I0422 18:04:47.868339   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)       <backend model='random'>/dev/random</backend>
	I0422 18:04:47.868363   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     </rng>
	I0422 18:04:47.868378   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     
	I0422 18:04:47.868397   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)     
	I0422 18:04:47.868406   54374 main.go:141] libmachine: (kubernetes-upgrade-432126)   </devices>
	I0422 18:04:47.868411   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) </domain>
	I0422 18:04:47.868419   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) 
	I0422 18:04:47.872723   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:fe:16:0c in network default
	I0422 18:04:47.873563   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Ensuring networks are active...
	I0422 18:04:47.873590   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:47.874444   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Ensuring network default is active
	I0422 18:04:47.874731   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Ensuring network mk-kubernetes-upgrade-432126 is active
	I0422 18:04:47.875333   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Getting domain xml...
	I0422 18:04:47.876195   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Creating domain...
	I0422 18:04:49.225895   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Waiting to get IP...
	I0422 18:04:49.227635   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:49.228189   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:49.228217   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:49.228170   56844 retry.go:31] will retry after 308.957978ms: waiting for machine to come up
	I0422 18:04:49.539249   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:49.539782   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:49.539821   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:49.539764   56844 retry.go:31] will retry after 247.616318ms: waiting for machine to come up
	I0422 18:04:49.789218   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:49.789791   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:49.789821   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:49.789703   56844 retry.go:31] will retry after 440.372796ms: waiting for machine to come up
	I0422 18:04:50.231385   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:50.231824   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:50.231854   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:50.231782   56844 retry.go:31] will retry after 550.811644ms: waiting for machine to come up
	I0422 18:04:50.784650   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:50.785146   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:50.785168   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:50.785103   56844 retry.go:31] will retry after 684.108645ms: waiting for machine to come up
	I0422 18:04:51.471027   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:51.471393   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:51.471415   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:51.471300   56844 retry.go:31] will retry after 610.462987ms: waiting for machine to come up
	I0422 18:04:52.083028   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:52.083470   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:52.083499   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:52.083416   56844 retry.go:31] will retry after 1.158142372s: waiting for machine to come up
	I0422 18:04:53.242747   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:53.243185   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:53.243215   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:53.243110   56844 retry.go:31] will retry after 1.321455021s: waiting for machine to come up
	I0422 18:04:54.565971   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:54.566478   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:54.566500   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:54.566449   56844 retry.go:31] will retry after 1.399433539s: waiting for machine to come up
	I0422 18:04:55.967458   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:55.967846   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:55.967876   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:55.967797   56844 retry.go:31] will retry after 2.128215933s: waiting for machine to come up
	I0422 18:04:58.097256   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:04:58.097688   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:04:58.097720   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:04:58.097636   56844 retry.go:31] will retry after 2.553968218s: waiting for machine to come up
	I0422 18:05:00.655089   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:00.655622   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:05:00.655655   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:05:00.655548   56844 retry.go:31] will retry after 2.730880691s: waiting for machine to come up
	I0422 18:05:03.387985   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:03.388402   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:05:03.388434   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:05:03.388340   56844 retry.go:31] will retry after 3.953482238s: waiting for machine to come up
	I0422 18:05:07.343577   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:07.343947   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find current IP address of domain kubernetes-upgrade-432126 in network mk-kubernetes-upgrade-432126
	I0422 18:05:07.343973   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | I0422 18:05:07.343912   56844 retry.go:31] will retry after 4.688506985s: waiting for machine to come up
	I0422 18:05:12.036160   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.036714   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Found IP for machine: 192.168.50.33
	I0422 18:05:12.036735   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Reserving static IP address...
	I0422 18:05:12.036745   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has current primary IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.037101   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-432126", mac: "52:54:00:54:2a:6b", ip: "192.168.50.33"} in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.115656   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Getting to WaitForSSH function...
	I0422 18:05:12.115686   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Reserved static IP address: 192.168.50.33
	I0422 18:05:12.115701   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Waiting for SSH to be available...
	I0422 18:05:12.118843   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.119529   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:12.119557   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.119684   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Using SSH client type: external
	I0422 18:05:12.119701   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa (-rw-------)
	I0422 18:05:12.119743   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:05:12.119769   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | About to run SSH command:
	I0422 18:05:12.119783   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | exit 0
	I0422 18:05:12.251410   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | SSH cmd err, output: <nil>: 
	I0422 18:05:12.251715   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) KVM machine creation complete!
	I0422 18:05:12.252174   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetConfigRaw
	I0422 18:05:12.252705   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:05:12.252911   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:05:12.253076   54374 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 18:05:12.253093   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetState
	I0422 18:05:12.254391   54374 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 18:05:12.254404   54374 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 18:05:12.254409   54374 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 18:05:12.254415   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:12.256938   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.257327   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:12.257370   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.257440   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:12.257629   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.257816   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.257970   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:12.258151   54374 main.go:141] libmachine: Using SSH client type: native
	I0422 18:05:12.258342   54374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I0422 18:05:12.258355   54374 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 18:05:12.374762   54374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:05:12.374799   54374 main.go:141] libmachine: Detecting the provisioner...
	I0422 18:05:12.374812   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:12.377839   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.378225   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:12.378266   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.378527   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:12.378749   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.378927   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.379095   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:12.379303   54374 main.go:141] libmachine: Using SSH client type: native
	I0422 18:05:12.379479   54374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I0422 18:05:12.379490   54374 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 18:05:12.492296   54374 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 18:05:12.492465   54374 main.go:141] libmachine: found compatible host: buildroot
	I0422 18:05:12.492482   54374 main.go:141] libmachine: Provisioning with buildroot...
	I0422 18:05:12.492494   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetMachineName
	I0422 18:05:12.492760   54374 buildroot.go:166] provisioning hostname "kubernetes-upgrade-432126"
	I0422 18:05:12.492784   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetMachineName
	I0422 18:05:12.492986   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:12.495616   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.495974   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:12.496013   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.496141   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:12.496320   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.496488   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.496618   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:12.496785   54374 main.go:141] libmachine: Using SSH client type: native
	I0422 18:05:12.496958   54374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I0422 18:05:12.496971   54374 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-432126 && echo "kubernetes-upgrade-432126" | sudo tee /etc/hostname
	I0422 18:05:12.622028   54374 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-432126
	
	I0422 18:05:12.622056   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:12.624846   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.625196   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:12.625228   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.625410   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:12.625609   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.625780   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:12.625944   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:12.626156   54374 main.go:141] libmachine: Using SSH client type: native
	I0422 18:05:12.626344   54374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I0422 18:05:12.626362   54374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-432126' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-432126/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-432126' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:05:12.745091   54374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:05:12.745125   54374 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:05:12.745160   54374 buildroot.go:174] setting up certificates
	I0422 18:05:12.745179   54374 provision.go:84] configureAuth start
	I0422 18:05:12.745192   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetMachineName
	I0422 18:05:12.745547   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetIP
	I0422 18:05:12.748532   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.748992   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:12.749025   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.749187   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:12.751362   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.751623   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:12.751644   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:12.751768   54374 provision.go:143] copyHostCerts
	I0422 18:05:12.751825   54374 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:05:12.751837   54374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:05:12.751901   54374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:05:12.752045   54374 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:05:12.752058   54374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:05:12.752088   54374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:05:12.752160   54374 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:05:12.752170   54374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:05:12.752195   54374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:05:12.752253   54374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-432126 san=[127.0.0.1 192.168.50.33 kubernetes-upgrade-432126 localhost minikube]
	I0422 18:05:13.069763   54374 provision.go:177] copyRemoteCerts
	I0422 18:05:13.069815   54374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:05:13.069836   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:13.072279   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.072760   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.072792   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.072896   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:13.073074   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.073238   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:13.073393   54374 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa Username:docker}
	I0422 18:05:13.157768   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:05:13.184013   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0422 18:05:13.210048   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:05:13.238276   54374 provision.go:87] duration metric: took 493.080835ms to configureAuth
	I0422 18:05:13.238318   54374 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:05:13.238611   54374 config.go:182] Loaded profile config "kubernetes-upgrade-432126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:05:13.238733   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:13.242597   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.242969   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.243014   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.243165   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:13.243402   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.243603   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.243743   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:13.243934   54374 main.go:141] libmachine: Using SSH client type: native
	I0422 18:05:13.244160   54374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I0422 18:05:13.244186   54374 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:05:13.518164   54374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:05:13.518196   54374 main.go:141] libmachine: Checking connection to Docker...
	I0422 18:05:13.518209   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetURL
	I0422 18:05:13.519471   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | Using libvirt version 6000000
	I0422 18:05:13.521620   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.521991   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.522023   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.522108   54374 main.go:141] libmachine: Docker is up and running!
	I0422 18:05:13.522124   54374 main.go:141] libmachine: Reticulating splines...
	I0422 18:05:13.522132   54374 client.go:171] duration metric: took 26.276155091s to LocalClient.Create
	I0422 18:05:13.522157   54374 start.go:167] duration metric: took 26.276235205s to libmachine.API.Create "kubernetes-upgrade-432126"
	I0422 18:05:13.522166   54374 start.go:293] postStartSetup for "kubernetes-upgrade-432126" (driver="kvm2")
	I0422 18:05:13.522178   54374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:05:13.522203   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:05:13.522445   54374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:05:13.522467   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:13.524690   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.525050   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.525083   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.525246   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:13.525419   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.525577   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:13.525743   54374 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa Username:docker}
	I0422 18:05:13.609974   54374 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:05:13.614631   54374 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:05:13.614669   54374 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:05:13.614737   54374 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:05:13.614864   54374 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:05:13.614979   54374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:05:13.625348   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:05:13.651271   54374 start.go:296] duration metric: took 129.088056ms for postStartSetup
	I0422 18:05:13.651355   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetConfigRaw
	I0422 18:05:13.651983   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetIP
	I0422 18:05:13.654833   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.655266   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.655293   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.655535   54374 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/config.json ...
	I0422 18:05:13.655763   54374 start.go:128] duration metric: took 26.431281602s to createHost
	I0422 18:05:13.655787   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:13.657867   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.658195   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.658225   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.658411   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:13.658588   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.658746   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.658883   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:13.659019   54374 main.go:141] libmachine: Using SSH client type: native
	I0422 18:05:13.659208   54374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I0422 18:05:13.659219   54374 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:05:13.772223   54374 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713809113.756768729
	
	I0422 18:05:13.772243   54374 fix.go:216] guest clock: 1713809113.756768729
	I0422 18:05:13.772250   54374 fix.go:229] Guest: 2024-04-22 18:05:13.756768729 +0000 UTC Remote: 2024-04-22 18:05:13.655775876 +0000 UTC m=+51.038898267 (delta=100.992853ms)
	I0422 18:05:13.772268   54374 fix.go:200] guest clock delta is within tolerance: 100.992853ms
	I0422 18:05:13.772273   54374 start.go:83] releasing machines lock for "kubernetes-upgrade-432126", held for 26.547972459s
	I0422 18:05:13.772305   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:05:13.772622   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetIP
	I0422 18:05:13.775484   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.776001   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.776042   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.776190   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:05:13.776731   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:05:13.776919   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:05:13.777007   54374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:05:13.777042   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:13.777130   54374 ssh_runner.go:195] Run: cat /version.json
	I0422 18:05:13.777151   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:05:13.779883   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.780085   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.780259   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.780285   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.780523   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:13.780547   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:13.780592   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:13.780711   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:05:13.780782   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.780862   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:05:13.780906   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:13.780954   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:05:13.781016   54374 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa Username:docker}
	I0422 18:05:13.781058   54374 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa Username:docker}
	I0422 18:05:13.903713   54374 ssh_runner.go:195] Run: systemctl --version
	I0422 18:05:13.910783   54374 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:05:14.079710   54374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:05:14.086690   54374 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:05:14.086769   54374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:05:14.105334   54374 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:05:14.105357   54374 start.go:494] detecting cgroup driver to use...
	I0422 18:05:14.105439   54374 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:05:14.123913   54374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:05:14.139454   54374 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:05:14.139520   54374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:05:14.154692   54374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:05:14.170019   54374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:05:14.308131   54374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:05:14.461797   54374 docker.go:233] disabling docker service ...
	I0422 18:05:14.461866   54374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:05:14.476658   54374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:05:14.490851   54374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:05:14.631639   54374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:05:14.765255   54374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:05:14.781532   54374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:05:14.803341   54374 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:05:14.803402   54374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:05:14.814956   54374 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:05:14.815042   54374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:05:14.825935   54374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:05:14.836458   54374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
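The three sed invocations above pin cri-o's pause image to registry.k8s.io/pause:3.2 and switch its cgroup manager to "cgroupfs" (with conmon placed in the "pod" cgroup) by editing /etc/crio/crio.conf.d/02-crio.conf. As a rough illustration of how to confirm what that drop-in ended up containing, here is a minimal Go sketch (not minikube code; run it on the node) that prints those settings back out:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// Print the pause_image, cgroup_manager and conmon_cgroup lines from cri-o's
	// drop-in config, i.e. the values the sed edits in the log above should have set.
	func main() {
		f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "pause_image") ||
				strings.HasPrefix(line, "cgroup_manager") ||
				strings.HasPrefix(line, "conmon_cgroup") {
				fmt.Println(line)
			}
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}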
	I0422 18:05:14.847279   54374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:05:14.858757   54374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:05:14.868810   54374 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:05:14.868884   54374 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:05:14.883363   54374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
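The sysctl probe above exits with status 255 because the bridge-netfilter proc entry only exists once the br_netfilter module is loaded, so the next two commands load the module and enable IPv4 forwarding. A minimal Go sketch of the same check-then-load pattern (illustrative only, requires root; not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBrNetfilter mirrors the pattern in the log: if the bridge-netfilter
	// sysctl is missing, load br_netfilter, then turn on IPv4 forwarding.
	func ensureBrNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, mErr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); mErr != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", mErr, out)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}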
	I0422 18:05:14.893844   54374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:05:15.022102   54374 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:05:15.188708   54374 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:05:15.188825   54374 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:05:15.194515   54374 start.go:562] Will wait 60s for crictl version
	I0422 18:05:15.194616   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:15.198890   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:05:15.239651   54374 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
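Before preparing Kubernetes, the start logic waits for /var/run/crio/crio.sock and then asks crictl for the runtime name and version, which is where the CRI-O 1.29.1 figure below comes from. A small Go sketch of that probe, assuming crictl is installed on the node and already pointed at the cri-o socket via /etc/crictl.yaml as shown earlier in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// Ask the container runtime for its version via crictl, mirroring the
	// "Will wait 60s for crictl version" step in the log. Illustrative only.
	func main() {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "crictl version failed: %v\n%s", err, out)
			os.Exit(1)
		}
		fmt.Print(string(out))
	}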
	I0422 18:05:15.239745   54374 ssh_runner.go:195] Run: crio --version
	I0422 18:05:15.274462   54374 ssh_runner.go:195] Run: crio --version
	I0422 18:05:15.310268   54374 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:05:15.311885   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetIP
	I0422 18:05:15.315266   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:15.315684   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:05:03 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:05:15.315712   54374 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:05:15.315957   54374 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:05:15.320657   54374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:05:15.335231   54374 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-432126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:05:15.335325   54374 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:05:15.335369   54374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:05:15.371093   54374 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:05:15.371192   54374 ssh_runner.go:195] Run: which lz4
	I0422 18:05:15.375874   54374 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:05:15.380759   54374 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:05:15.380797   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:05:17.279547   54374 crio.go:462] duration metric: took 1.903698456s to copy over tarball
	I0422 18:05:17.279632   54374 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:05:20.058357   54374 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.778680869s)
	I0422 18:05:20.058395   54374 crio.go:469] duration metric: took 2.778816005s to extract the tarball
	I0422 18:05:20.058405   54374 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:05:20.102560   54374 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:05:20.161968   54374 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
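Both "assuming images are not preloaded" messages come from listing the runtime's images with sudo crictl images --output json and looking for registry.k8s.io/kube-apiserver:v1.20.0. A sketch of that lookup is below; it assumes crictl's JSON output carries an "images" array whose entries have "repoTags" fields (true for current crictl releases), and it is illustrative rather than minikube's actual check:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type crictlImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// hasImage reports whether the runtime already knows the given tag, using the
	// same `crictl images --output json` call that appears in the log above.
	func hasImage(tag string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			return false, err
		}
		for _, img := range imgs.Images {
			for _, t := range img.RepoTags {
				if t == tag {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preloaded:", ok)
	}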
	I0422 18:05:20.162001   54374 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:05:20.162086   54374 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:05:20.162092   54374 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:05:20.162116   54374 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:05:20.162099   54374 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:05:20.162168   54374 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:05:20.162198   54374 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:05:20.162307   54374 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:05:20.162176   54374 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:05:20.163536   54374 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:05:20.163612   54374 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:05:20.163631   54374 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:05:20.163637   54374 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:05:20.163681   54374 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:05:20.163707   54374 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:05:20.163710   54374 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:05:20.163756   54374 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:05:20.404413   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:05:20.429002   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:05:20.468757   54374 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:05:20.468803   54374 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:05:20.468848   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:20.489896   54374 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:05:20.489942   54374 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:05:20.489953   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:05:20.489977   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:20.528852   54374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:05:20.528952   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:05:20.531279   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:05:20.538539   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:05:20.544614   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:05:20.567387   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:05:20.597388   54374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:05:20.621340   54374 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:05:20.621383   54374 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:05:20.621455   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:20.663300   54374 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:05:20.663336   54374 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:05:20.663353   54374 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:05:20.663365   54374 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:05:20.663407   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:20.663407   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:20.674038   54374 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:05:20.674084   54374 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:05:20.674090   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:05:20.674113   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:05:20.674129   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:20.674174   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:05:20.690715   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:05:20.755717   54374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:05:20.760353   54374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:05:20.760424   54374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:05:20.760484   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:05:20.800818   54374 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:05:20.800873   54374 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:05:20.800925   54374 ssh_runner.go:195] Run: which crictl
	I0422 18:05:20.809692   54374 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:05:20.809797   54374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:05:20.848994   54374 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:05:21.410513   54374 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:05:21.564166   54374 cache_images.go:92] duration metric: took 1.402146071s to LoadCachedImages
	W0422 18:05:21.564259   54374 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0422 18:05:21.564278   54374 kubeadm.go:928] updating node { 192.168.50.33 8443 v1.20.0 crio true true} ...
	I0422 18:05:21.564393   54374 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-432126 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:05:21.564478   54374 ssh_runner.go:195] Run: crio config
	I0422 18:05:21.613563   54374 cni.go:84] Creating CNI manager for ""
	I0422 18:05:21.613584   54374 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:05:21.613593   54374 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:05:21.613609   54374 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.33 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-432126 NodeName:kubernetes-upgrade-432126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:05:21.613728   54374 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-432126"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:05:21.613812   54374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:05:21.624758   54374 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:05:21.624831   54374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:05:21.634856   54374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0422 18:05:21.653216   54374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:05:21.672865   54374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0422 18:05:21.692972   54374 ssh_runner.go:195] Run: grep 192.168.50.33	control-plane.minikube.internal$ /etc/hosts
	I0422 18:05:21.697210   54374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
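The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP: it drops any existing line for that name, appends the new mapping, and copies the result back into place. A minimal Go version of the same idea (illustrative only, requires root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line ending in "\thost" and appends
	// "ip\thost", mirroring the grep/echo pipeline in the log above.
	func ensureHostsEntry(ip, host string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("192.168.50.33", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}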
	I0422 18:05:21.709947   54374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:05:21.848629   54374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:05:21.868451   54374 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126 for IP: 192.168.50.33
	I0422 18:05:21.868478   54374 certs.go:194] generating shared ca certs ...
	I0422 18:05:21.868501   54374 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:05:21.868684   54374 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:05:21.868756   54374 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:05:21.868769   54374 certs.go:256] generating profile certs ...
	I0422 18:05:21.868861   54374 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/client.key
	I0422 18:05:21.868880   54374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/client.crt with IP's: []
	I0422 18:05:22.046858   54374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/client.crt ...
	I0422 18:05:22.046898   54374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/client.crt: {Name:mk5ae31c2a34e14eb5cf64028adb103a3001729b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:05:22.047096   54374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/client.key ...
	I0422 18:05:22.047117   54374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/client.key: {Name:mkca05f2c9b21ff7cbbf00098dfb6f810b1ee627 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:05:22.047251   54374 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key.54934615
	I0422 18:05:22.047273   54374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.crt.54934615 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.33]
	I0422 18:05:22.304710   54374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.crt.54934615 ...
	I0422 18:05:22.304742   54374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.crt.54934615: {Name:mkc102302371b9df6fdf42abf0184d3afe8fcfd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:05:22.304900   54374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key.54934615 ...
	I0422 18:05:22.304913   54374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key.54934615: {Name:mk822f0cb79e21a8f900ee974cccf06bbb5d021c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:05:22.304980   54374 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.crt.54934615 -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.crt
	I0422 18:05:22.305048   54374 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key.54934615 -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key
	I0422 18:05:22.305114   54374 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.key
	I0422 18:05:22.305128   54374 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.crt with IP's: []
	I0422 18:05:22.445408   54374 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.crt ...
	I0422 18:05:22.445437   54374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.crt: {Name:mkad48276eca271cbc734bc69fef0dd26e15e13a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:05:22.445605   54374 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.key ...
	I0422 18:05:22.445618   54374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.key: {Name:mk441453c298d43b05d2cf7f2f4ec8a0c1873c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:05:22.445783   54374 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:05:22.445823   54374 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:05:22.445833   54374 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:05:22.445852   54374 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:05:22.445875   54374 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:05:22.445895   54374 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:05:22.445929   54374 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:05:22.446566   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:05:22.473991   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:05:22.505283   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:05:22.537043   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:05:22.570227   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 18:05:22.598579   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:05:22.624097   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:05:22.651475   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:05:22.675140   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:05:22.706008   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:05:22.737935   54374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:05:22.770297   54374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:05:22.787581   54374 ssh_runner.go:195] Run: openssl version
	I0422 18:05:22.793422   54374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:05:22.805477   54374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:05:22.810839   54374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:05:22.810933   54374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:05:22.817123   54374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:05:22.830227   54374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:05:22.843570   54374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:05:22.848528   54374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:05:22.848587   54374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:05:22.854518   54374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:05:22.866748   54374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:05:22.878537   54374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:05:22.883302   54374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:05:22.883367   54374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:05:22.889557   54374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
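The certificate steps above generate a profile client cert, an apiserver serving cert signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.33, and an aggregator proxy-client cert, then install the CA files under /usr/share/ca-certificates with hash-named symlinks in /etc/ssl/certs. To see what one of the generated certs actually covers, a short Go sketch using only the standard library (the path is just an example taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// Print the subject, DNS SANs, and IP SANs of a PEM-encoded certificate
	// such as the apiserver.crt generated above. Illustrative only.
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("subject:", cert.Subject)
		fmt.Println("dns sans:", cert.DNSNames)
		fmt.Println("ip sans:", cert.IPAddresses)
	}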
	I0422 18:05:22.903155   54374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:05:22.907780   54374 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 18:05:22.907837   54374 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-432126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.33 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:05:22.907897   54374 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:05:22.907945   54374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:05:22.947533   54374 cri.go:89] found id: ""
	I0422 18:05:22.947610   54374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 18:05:22.958673   54374 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:05:22.969326   54374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:05:22.980095   54374 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:05:22.980114   54374 kubeadm.go:156] found existing configuration files:
	
	I0422 18:05:22.980156   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:05:22.990160   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:05:22.990222   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:05:23.000743   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:05:23.010922   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:05:23.010989   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:05:23.021031   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:05:23.031059   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:05:23.031134   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:05:23.041808   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:05:23.052717   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:05:23.052776   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:05:23.063758   54374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:05:23.352634   54374 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:07:21.322939   54374 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:07:21.323232   54374 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:07:21.324716   54374 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:07:21.324818   54374 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:07:21.324990   54374 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:07:21.325205   54374 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:07:21.325417   54374 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:07:21.325555   54374 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:07:21.327955   54374 out.go:204]   - Generating certificates and keys ...
	I0422 18:07:21.328088   54374 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:07:21.328177   54374 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:07:21.328256   54374 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 18:07:21.328332   54374 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 18:07:21.328413   54374 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 18:07:21.328476   54374 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 18:07:21.328541   54374 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 18:07:21.328735   54374 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-432126 localhost] and IPs [192.168.50.33 127.0.0.1 ::1]
	I0422 18:07:21.328877   54374 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 18:07:21.329074   54374 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-432126 localhost] and IPs [192.168.50.33 127.0.0.1 ::1]
	I0422 18:07:21.329159   54374 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 18:07:21.329235   54374 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 18:07:21.329294   54374 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 18:07:21.329360   54374 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:07:21.329417   54374 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:07:21.329476   54374 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:07:21.329563   54374 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:07:21.329624   54374 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:07:21.329739   54374 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:07:21.329843   54374 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:07:21.329893   54374 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:07:21.329977   54374 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:07:21.331984   54374 out.go:204]   - Booting up control plane ...
	I0422 18:07:21.332117   54374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:07:21.332212   54374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:07:21.332313   54374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:07:21.332450   54374 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:07:21.332687   54374 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:07:21.332768   54374 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:07:21.332878   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:07:21.333161   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:07:21.333268   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:07:21.333496   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:07:21.333588   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:07:21.333786   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:07:21.333870   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:07:21.334079   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:07:21.334166   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:07:21.334401   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:07:21.334415   54374 kubeadm.go:309] 
	I0422 18:07:21.334464   54374 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:07:21.334514   54374 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:07:21.334523   54374 kubeadm.go:309] 
	I0422 18:07:21.334564   54374 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:07:21.334607   54374 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:07:21.334727   54374 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:07:21.334739   54374 kubeadm.go:309] 
	I0422 18:07:21.334860   54374 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:07:21.334908   54374 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:07:21.334950   54374 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:07:21.334961   54374 kubeadm.go:309] 
	I0422 18:07:21.335090   54374 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:07:21.335203   54374 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:07:21.335215   54374 kubeadm.go:309] 
	I0422 18:07:21.335336   54374 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:07:21.335463   54374 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:07:21.335555   54374 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:07:21.335646   54374 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W0422 18:07:21.335775   54374 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-432126 localhost] and IPs [192.168.50.33 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-432126 localhost] and IPs [192.168.50.33 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0422 18:07:21.335842   54374 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:07:21.336125   54374 kubeadm.go:309] 
	I0422 18:07:24.443704   54374 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.107836768s)
	I0422 18:07:24.443769   54374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:07:24.462388   54374 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:07:24.475117   54374 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:07:24.475150   54374 kubeadm.go:156] found existing configuration files:
	
	I0422 18:07:24.475197   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:07:24.487535   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:07:24.487588   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:07:24.501264   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:07:24.513394   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:07:24.513483   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:07:24.525745   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:07:24.536738   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:07:24.536813   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:07:24.550873   54374 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:07:24.564120   54374 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:07:24.564188   54374 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:07:24.576319   54374 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:07:24.866981   54374 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:09:20.952660   54374 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:09:20.952795   54374 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:09:20.954655   54374 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:09:20.954719   54374 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:09:20.954838   54374 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:09:20.954943   54374 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:09:20.955029   54374 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:09:20.955103   54374 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:09:20.956964   54374 out.go:204]   - Generating certificates and keys ...
	I0422 18:09:20.957059   54374 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:09:20.957111   54374 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:09:20.957180   54374 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:09:20.957229   54374 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:09:20.957289   54374 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:09:20.957347   54374 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:09:20.957407   54374 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:09:20.957467   54374 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:09:20.957555   54374 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:09:20.957679   54374 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:09:20.957751   54374 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:09:20.957830   54374 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:09:20.957912   54374 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:09:20.957964   54374 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:09:20.958020   54374 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:09:20.958064   54374 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:09:20.958175   54374 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:09:20.958323   54374 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:09:20.958388   54374 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:09:20.958483   54374 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:09:20.960005   54374 out.go:204]   - Booting up control plane ...
	I0422 18:09:20.960134   54374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:09:20.960239   54374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:09:20.960335   54374 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:09:20.960435   54374 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:09:20.960644   54374 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:09:20.960712   54374 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:09:20.960799   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:09:20.961038   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:09:20.961123   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:09:20.961365   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:09:20.961441   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:09:20.961654   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:09:20.961745   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:09:20.961987   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:09:20.962074   54374 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:09:20.962311   54374 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:09:20.962340   54374 kubeadm.go:309] 
	I0422 18:09:20.962396   54374 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:09:20.962449   54374 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:09:20.962455   54374 kubeadm.go:309] 
	I0422 18:09:20.962500   54374 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:09:20.962543   54374 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:09:20.962663   54374 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:09:20.962670   54374 kubeadm.go:309] 
	I0422 18:09:20.962795   54374 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:09:20.962840   54374 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:09:20.962883   54374 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:09:20.962889   54374 kubeadm.go:309] 
	I0422 18:09:20.963025   54374 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:09:20.963152   54374 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:09:20.963159   54374 kubeadm.go:309] 
	I0422 18:09:20.963305   54374 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:09:20.963432   54374 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:09:20.963533   54374 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:09:20.963627   54374 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:09:20.963717   54374 kubeadm.go:393] duration metric: took 3m58.055881431s to StartCluster
	I0422 18:09:20.963773   54374 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:09:20.963839   54374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:09:20.963939   54374 kubeadm.go:309] 
	I0422 18:09:21.029962   54374 cri.go:89] found id: ""
	I0422 18:09:21.029988   54374 logs.go:276] 0 containers: []
	W0422 18:09:21.029996   54374 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:09:21.030002   54374 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:09:21.030059   54374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:09:21.070754   54374 cri.go:89] found id: ""
	I0422 18:09:21.070782   54374 logs.go:276] 0 containers: []
	W0422 18:09:21.070793   54374 logs.go:278] No container was found matching "etcd"
	I0422 18:09:21.070800   54374 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:09:21.070866   54374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:09:21.119615   54374 cri.go:89] found id: ""
	I0422 18:09:21.119642   54374 logs.go:276] 0 containers: []
	W0422 18:09:21.119652   54374 logs.go:278] No container was found matching "coredns"
	I0422 18:09:21.119660   54374 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:09:21.119718   54374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:09:21.170030   54374 cri.go:89] found id: ""
	I0422 18:09:21.170061   54374 logs.go:276] 0 containers: []
	W0422 18:09:21.170071   54374 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:09:21.170079   54374 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:09:21.170137   54374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:09:21.209681   54374 cri.go:89] found id: ""
	I0422 18:09:21.209713   54374 logs.go:276] 0 containers: []
	W0422 18:09:21.209721   54374 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:09:21.209727   54374 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:09:21.209788   54374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:09:21.250285   54374 cri.go:89] found id: ""
	I0422 18:09:21.250315   54374 logs.go:276] 0 containers: []
	W0422 18:09:21.250323   54374 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:09:21.250328   54374 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:09:21.250393   54374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:09:21.307835   54374 cri.go:89] found id: ""
	I0422 18:09:21.307867   54374 logs.go:276] 0 containers: []
	W0422 18:09:21.307878   54374 logs.go:278] No container was found matching "kindnet"
	I0422 18:09:21.307890   54374 logs.go:123] Gathering logs for dmesg ...
	I0422 18:09:21.307916   54374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:09:21.323845   54374 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:09:21.323878   54374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:09:21.435415   54374 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:09:21.435438   54374 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:09:21.435455   54374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:09:21.550831   54374 logs.go:123] Gathering logs for container status ...
	I0422 18:09:21.550879   54374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:09:21.602131   54374 logs.go:123] Gathering logs for kubelet ...
	I0422 18:09:21.602175   54374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0422 18:09:21.675541   54374 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:09:21.675593   54374 out.go:239] * 
	W0422 18:09:21.675667   54374 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:09:21.675698   54374 out.go:239] * 
	W0422 18:09:21.676520   54374 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:09:21.680017   54374 out.go:177] 
	W0422 18:09:21.681458   54374 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:09:21.681538   54374 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:09:21.681563   54374 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:09:21.683295   54374 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
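For reference, the failure above is minikube's K8S_KUBELET_NOT_RUNNING path: kubeadm's wait-control-plane phase never got a healthy response from the kubelet on 127.0.0.1:10248. A minimal follow-up on the node, using only the commands the captured output itself suggests (profile name kubernetes-upgrade-432126 is the one from this run; this is a troubleshooting sketch, not part of the test):
	  # why the kubelet never came up
	  out/minikube-linux-amd64 -p kubernetes-upgrade-432126 ssh "sudo systemctl status kubelet"
	  out/minikube-linux-amd64 -p kubernetes-upgrade-432126 ssh "sudo journalctl -xeu kubelet"
	  # list any control-plane containers cri-o managed to start
	  out/minikube-linux-amd64 -p kubernetes-upgrade-432126 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	  # the preflight warning notes the kubelet service is not enabled
	  out/minikube-linux-amd64 -p kubernetes-upgrade-432126 ssh "sudo systemctl enable kubelet.service"
	  # minikube's own suggestion for cgroup-driver mismatches
	  out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd --driver=kvm2 --container-runtime=crio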
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-432126
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-432126: (1.928312279s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-432126 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-432126 status --format={{.Host}}: exit status 7 (99.134093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.793706019s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-432126 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (92.96456ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-432126] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-432126
	    minikube start -p kubernetes-upgrade-432126 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4321262 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-432126 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-432126 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.3576259s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-22 18:11:21.061002727 +0000 UTC m=+4465.790616345
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-432126 -n kubernetes-upgrade-432126
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-432126 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-432126 logs -n 25: (1.704631989s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC | 22 Apr 24 18:08 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-759056             | running-upgrade-759056    | jenkins | v1.33.0 | 22 Apr 24 18:08 UTC | 22 Apr 24 18:08 UTC |
	| start   | -p cert-expiration-076896             | cert-expiration-076896    | jenkins | v1.33.0 | 22 Apr 24 18:08 UTC | 22 Apr 24 18:09 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p pause-765072                       | pause-765072              | jenkins | v1.33.0 | 22 Apr 24 18:08 UTC | 22 Apr 24 18:08 UTC |
	| start   | -p force-systemd-env-005444           | force-systemd-env-005444  | jenkins | v1.33.0 | 22 Apr 24 18:08 UTC | 22 Apr 24 18:09 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:08 UTC | 22 Apr 24 18:09 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:09 UTC |
	| stop    | -p kubernetes-upgrade-432126          | kubernetes-upgrade-432126 | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:09 UTC |
	| start   | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:09 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-432126          | kubernetes-upgrade-432126 | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:10 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-005444           | force-systemd-env-005444  | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:09 UTC |
	| start   | -p cert-options-709321                | cert-options-709321       | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:10 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-799191 sudo           | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:09 UTC |
	| start   | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:09 UTC | 22 Apr 24 18:11 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-432126          | kubernetes-upgrade-432126 | jenkins | v1.33.0 | 22 Apr 24 18:10 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-432126          | kubernetes-upgrade-432126 | jenkins | v1.33.0 | 22 Apr 24 18:10 UTC | 22 Apr 24 18:11 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-709321 ssh               | cert-options-709321       | jenkins | v1.33.0 | 22 Apr 24 18:10 UTC | 22 Apr 24 18:10 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-709321 -- sudo        | cert-options-709321       | jenkins | v1.33.0 | 22 Apr 24 18:10 UTC | 22 Apr 24 18:10 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-709321                | cert-options-709321       | jenkins | v1.33.0 | 22 Apr 24 18:10 UTC | 22 Apr 24 18:10 UTC |
	| start   | -p auto-457191 --memory=3072          | auto-457191               | jenkins | v1.33.0 | 22 Apr 24 18:10 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-799191 sudo           | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:11 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-799191                | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:11 UTC | 22 Apr 24 18:11 UTC |
	| start   | -p kindnet-457191                     | kindnet-457191            | jenkins | v1.33.0 | 22 Apr 24 18:11 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:11:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:11:02.318051   62386 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:11:02.318174   62386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:11:02.318185   62386 out.go:304] Setting ErrFile to fd 2...
	I0422 18:11:02.318203   62386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:11:02.318432   62386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:11:02.319020   62386 out.go:298] Setting JSON to false
	I0422 18:11:02.319980   62386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6807,"bootTime":1713802655,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:11:02.320042   62386 start.go:139] virtualization: kvm guest
	I0422 18:11:02.322031   62386 out.go:177] * [kindnet-457191] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:11:02.323623   62386 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:11:02.323589   62386 notify.go:220] Checking for updates...
	I0422 18:11:02.325123   62386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:11:02.326703   62386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:11:02.328157   62386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:11:02.329343   62386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:11:02.330432   62386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:11:02.331963   62386 config.go:182] Loaded profile config "auto-457191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:11:02.332050   62386 config.go:182] Loaded profile config "cert-expiration-076896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:11:02.332136   62386 config.go:182] Loaded profile config "kubernetes-upgrade-432126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:11:02.332230   62386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:11:02.368898   62386 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:11:02.370026   62386 start.go:297] selected driver: kvm2
	I0422 18:11:02.370040   62386 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:11:02.370057   62386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:11:02.370739   62386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:11:02.370824   62386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:11:02.385699   62386 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:11:02.385757   62386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 18:11:02.386025   62386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:11:02.386106   62386 cni.go:84] Creating CNI manager for "kindnet"
	I0422 18:11:02.386123   62386 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0422 18:11:02.386221   62386 start.go:340] cluster config:
	{Name:kindnet-457191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:kindnet-457191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:11:02.386341   62386 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:11:02.388027   62386 out.go:177] * Starting "kindnet-457191" primary control-plane node in "kindnet-457191" cluster
	I0422 18:10:59.343229   62266 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:10:59.343272   62266 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:10:59.343282   62266 cache.go:56] Caching tarball of preloaded images
	I0422 18:10:59.343368   62266 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:10:59.343381   62266 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:10:59.343511   62266 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/config.json ...
	I0422 18:10:59.343533   62266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/config.json: {Name:mk913a64fbfc6a5a14dca0f319fb604af52c9c06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:10:59.343677   62266 start.go:360] acquireMachinesLock for auto-457191: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:11:06.176539   62266 start.go:364] duration metric: took 6.832824839s to acquireMachinesLock for "auto-457191"
	I0422 18:11:06.176625   62266 start.go:93] Provisioning new machine with config: &{Name:auto-457191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:auto-457191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:11:06.176725   62266 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 18:11:05.932798   61825 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:11:05.932829   61825 machine.go:97] duration metric: took 6.761391151s to provisionDockerMachine
	I0422 18:11:05.932844   61825 start.go:293] postStartSetup for "kubernetes-upgrade-432126" (driver="kvm2")
	I0422 18:11:05.932857   61825 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:11:05.932882   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:11:05.933256   61825 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:11:05.933295   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:11:05.936642   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:05.936978   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:10:00 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:11:05.937010   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:05.937195   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:11:05.937400   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:11:05.937643   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:11:05.937797   61825 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa Username:docker}
	I0422 18:11:06.022690   61825 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:11:06.027355   61825 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:11:06.027382   61825 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:11:06.027446   61825 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:11:06.027535   61825 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:11:06.027625   61825 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:11:06.038177   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:11:06.064343   61825 start.go:296] duration metric: took 131.483866ms for postStartSetup
	I0422 18:11:06.064391   61825 fix.go:56] duration metric: took 6.919028257s for fixHost
	I0422 18:11:06.064416   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:11:06.067491   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.067827   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:10:00 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:11:06.067859   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.068016   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:11:06.068214   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:11:06.068411   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:11:06.068596   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:11:06.068787   61825 main.go:141] libmachine: Using SSH client type: native
	I0422 18:11:06.068944   61825 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.33 22 <nil> <nil>}
	I0422 18:11:06.068955   61825 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:11:06.176363   61825 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713809466.169865408
	
	I0422 18:11:06.176388   61825 fix.go:216] guest clock: 1713809466.169865408
	I0422 18:11:06.176395   61825 fix.go:229] Guest: 2024-04-22 18:11:06.169865408 +0000 UTC Remote: 2024-04-22 18:11:06.064397083 +0000 UTC m=+34.357903335 (delta=105.468325ms)
	I0422 18:11:06.176434   61825 fix.go:200] guest clock delta is within tolerance: 105.468325ms
	I0422 18:11:06.176441   61825 start.go:83] releasing machines lock for "kubernetes-upgrade-432126", held for 7.031103742s
	I0422 18:11:06.176469   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:11:06.176766   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetIP
	I0422 18:11:06.180107   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.180567   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:10:00 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:11:06.180598   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.180814   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:11:06.181340   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:11:06.181564   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .DriverName
	I0422 18:11:06.181656   61825 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:11:06.181707   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:11:06.181820   61825 ssh_runner.go:195] Run: cat /version.json
	I0422 18:11:06.181847   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHHostname
	I0422 18:11:06.184563   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.184716   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.184859   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:10:00 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:11:06.184887   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.185006   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:10:00 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:11:06.185031   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:06.185205   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:11:06.185209   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHPort
	I0422 18:11:06.185382   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:11:06.185471   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHKeyPath
	I0422 18:11:06.185562   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:11:06.185630   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetSSHUsername
	I0422 18:11:06.185697   61825 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa Username:docker}
	I0422 18:11:06.185749   61825 sshutil.go:53] new ssh client: &{IP:192.168.50.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/kubernetes-upgrade-432126/id_rsa Username:docker}
	I0422 18:11:06.274472   61825 ssh_runner.go:195] Run: systemctl --version
	I0422 18:11:06.315252   61825 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:11:06.482215   61825 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:11:06.492154   61825 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:11:06.492213   61825 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:11:06.502956   61825 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 18:11:06.502988   61825 start.go:494] detecting cgroup driver to use...
	I0422 18:11:06.503061   61825 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:11:06.531065   61825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:11:06.549088   61825 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:11:06.549141   61825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:11:06.563812   61825 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:11:06.578411   61825 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:11:06.728888   61825 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:11:02.389123   62386 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:11:02.389167   62386 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:11:02.389184   62386 cache.go:56] Caching tarball of preloaded images
	I0422 18:11:02.389266   62386 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:11:02.389280   62386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:11:02.389396   62386 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/config.json ...
	I0422 18:11:02.389418   62386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/config.json: {Name:mk6357a25e346cc334ddf1d5125e0f0398aa122d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:11:02.389559   62386 start.go:360] acquireMachinesLock for kindnet-457191: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:11:06.178688   62266 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0422 18:11:06.178856   62266 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:11:06.178901   62266 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:11:06.199431   62266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
	I0422 18:11:06.199861   62266 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:11:06.200523   62266 main.go:141] libmachine: Using API Version  1
	I0422 18:11:06.200545   62266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:11:06.200918   62266 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:11:06.201139   62266 main.go:141] libmachine: (auto-457191) Calling .GetMachineName
	I0422 18:11:06.201282   62266 main.go:141] libmachine: (auto-457191) Calling .DriverName
	I0422 18:11:06.201453   62266 start.go:159] libmachine.API.Create for "auto-457191" (driver="kvm2")
	I0422 18:11:06.201487   62266 client.go:168] LocalClient.Create starting
	I0422 18:11:06.201526   62266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 18:11:06.201566   62266 main.go:141] libmachine: Decoding PEM data...
	I0422 18:11:06.201582   62266 main.go:141] libmachine: Parsing certificate...
	I0422 18:11:06.201639   62266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 18:11:06.201657   62266 main.go:141] libmachine: Decoding PEM data...
	I0422 18:11:06.201668   62266 main.go:141] libmachine: Parsing certificate...
	I0422 18:11:06.201684   62266 main.go:141] libmachine: Running pre-create checks...
	I0422 18:11:06.201690   62266 main.go:141] libmachine: (auto-457191) Calling .PreCreateCheck
	I0422 18:11:06.202116   62266 main.go:141] libmachine: (auto-457191) Calling .GetConfigRaw
	I0422 18:11:06.202549   62266 main.go:141] libmachine: Creating machine...
	I0422 18:11:06.202571   62266 main.go:141] libmachine: (auto-457191) Calling .Create
	I0422 18:11:06.202709   62266 main.go:141] libmachine: (auto-457191) Creating KVM machine...
	I0422 18:11:06.204199   62266 main.go:141] libmachine: (auto-457191) DBG | found existing default KVM network
	I0422 18:11:06.205814   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:06.205633   62426 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012bda0}
	I0422 18:11:06.205843   62266 main.go:141] libmachine: (auto-457191) DBG | created network xml: 
	I0422 18:11:06.205858   62266 main.go:141] libmachine: (auto-457191) DBG | <network>
	I0422 18:11:06.205866   62266 main.go:141] libmachine: (auto-457191) DBG |   <name>mk-auto-457191</name>
	I0422 18:11:06.205879   62266 main.go:141] libmachine: (auto-457191) DBG |   <dns enable='no'/>
	I0422 18:11:06.205886   62266 main.go:141] libmachine: (auto-457191) DBG |   
	I0422 18:11:06.205897   62266 main.go:141] libmachine: (auto-457191) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0422 18:11:06.205913   62266 main.go:141] libmachine: (auto-457191) DBG |     <dhcp>
	I0422 18:11:06.205925   62266 main.go:141] libmachine: (auto-457191) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0422 18:11:06.205932   62266 main.go:141] libmachine: (auto-457191) DBG |     </dhcp>
	I0422 18:11:06.205946   62266 main.go:141] libmachine: (auto-457191) DBG |   </ip>
	I0422 18:11:06.205954   62266 main.go:141] libmachine: (auto-457191) DBG |   
	I0422 18:11:06.205965   62266 main.go:141] libmachine: (auto-457191) DBG | </network>
	I0422 18:11:06.205974   62266 main.go:141] libmachine: (auto-457191) DBG | 
	I0422 18:11:06.212044   62266 main.go:141] libmachine: (auto-457191) DBG | trying to create private KVM network mk-auto-457191 192.168.39.0/24...
	I0422 18:11:06.285477   62266 main.go:141] libmachine: (auto-457191) DBG | private KVM network mk-auto-457191 192.168.39.0/24 created
	I0422 18:11:06.285599   62266 main.go:141] libmachine: (auto-457191) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191 ...
	I0422 18:11:06.285638   62266 main.go:141] libmachine: (auto-457191) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 18:11:06.285652   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:06.285487   62426 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:11:06.285671   62266 main.go:141] libmachine: (auto-457191) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 18:11:06.528182   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:06.528045   62426 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191/id_rsa...
	I0422 18:11:06.633533   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:06.633371   62426 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191/auto-457191.rawdisk...
	I0422 18:11:06.633573   62266 main.go:141] libmachine: (auto-457191) DBG | Writing magic tar header
	I0422 18:11:06.633589   62266 main.go:141] libmachine: (auto-457191) DBG | Writing SSH key tar header
	I0422 18:11:06.633642   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:06.633486   62426 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191 ...
	I0422 18:11:06.633659   62266 main.go:141] libmachine: (auto-457191) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191 (perms=drwx------)
	I0422 18:11:06.633672   62266 main.go:141] libmachine: (auto-457191) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191
	I0422 18:11:06.633684   62266 main.go:141] libmachine: (auto-457191) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 18:11:06.633703   62266 main.go:141] libmachine: (auto-457191) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 18:11:06.633720   62266 main.go:141] libmachine: (auto-457191) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 18:11:06.633740   62266 main.go:141] libmachine: (auto-457191) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 18:11:06.633756   62266 main.go:141] libmachine: (auto-457191) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 18:11:06.633766   62266 main.go:141] libmachine: (auto-457191) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 18:11:06.633780   62266 main.go:141] libmachine: (auto-457191) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:11:06.633792   62266 main.go:141] libmachine: (auto-457191) Creating domain...
	I0422 18:11:06.633803   62266 main.go:141] libmachine: (auto-457191) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 18:11:06.633834   62266 main.go:141] libmachine: (auto-457191) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 18:11:06.633850   62266 main.go:141] libmachine: (auto-457191) DBG | Checking permissions on dir: /home/jenkins
	I0422 18:11:06.633859   62266 main.go:141] libmachine: (auto-457191) DBG | Checking permissions on dir: /home
	I0422 18:11:06.633872   62266 main.go:141] libmachine: (auto-457191) DBG | Skipping /home - not owner
	I0422 18:11:06.634857   62266 main.go:141] libmachine: (auto-457191) define libvirt domain using xml: 
	I0422 18:11:06.634876   62266 main.go:141] libmachine: (auto-457191) <domain type='kvm'>
	I0422 18:11:06.634882   62266 main.go:141] libmachine: (auto-457191)   <name>auto-457191</name>
	I0422 18:11:06.634889   62266 main.go:141] libmachine: (auto-457191)   <memory unit='MiB'>3072</memory>
	I0422 18:11:06.634894   62266 main.go:141] libmachine: (auto-457191)   <vcpu>2</vcpu>
	I0422 18:11:06.634901   62266 main.go:141] libmachine: (auto-457191)   <features>
	I0422 18:11:06.634931   62266 main.go:141] libmachine: (auto-457191)     <acpi/>
	I0422 18:11:06.634953   62266 main.go:141] libmachine: (auto-457191)     <apic/>
	I0422 18:11:06.634962   62266 main.go:141] libmachine: (auto-457191)     <pae/>
	I0422 18:11:06.634976   62266 main.go:141] libmachine: (auto-457191)     
	I0422 18:11:06.634987   62266 main.go:141] libmachine: (auto-457191)   </features>
	I0422 18:11:06.634999   62266 main.go:141] libmachine: (auto-457191)   <cpu mode='host-passthrough'>
	I0422 18:11:06.635007   62266 main.go:141] libmachine: (auto-457191)   
	I0422 18:11:06.635017   62266 main.go:141] libmachine: (auto-457191)   </cpu>
	I0422 18:11:06.635025   62266 main.go:141] libmachine: (auto-457191)   <os>
	I0422 18:11:06.635036   62266 main.go:141] libmachine: (auto-457191)     <type>hvm</type>
	I0422 18:11:06.635046   62266 main.go:141] libmachine: (auto-457191)     <boot dev='cdrom'/>
	I0422 18:11:06.635054   62266 main.go:141] libmachine: (auto-457191)     <boot dev='hd'/>
	I0422 18:11:06.635064   62266 main.go:141] libmachine: (auto-457191)     <bootmenu enable='no'/>
	I0422 18:11:06.635074   62266 main.go:141] libmachine: (auto-457191)   </os>
	I0422 18:11:06.635082   62266 main.go:141] libmachine: (auto-457191)   <devices>
	I0422 18:11:06.635093   62266 main.go:141] libmachine: (auto-457191)     <disk type='file' device='cdrom'>
	I0422 18:11:06.635110   62266 main.go:141] libmachine: (auto-457191)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191/boot2docker.iso'/>
	I0422 18:11:06.635139   62266 main.go:141] libmachine: (auto-457191)       <target dev='hdc' bus='scsi'/>
	I0422 18:11:06.635148   62266 main.go:141] libmachine: (auto-457191)       <readonly/>
	I0422 18:11:06.635157   62266 main.go:141] libmachine: (auto-457191)     </disk>
	I0422 18:11:06.635167   62266 main.go:141] libmachine: (auto-457191)     <disk type='file' device='disk'>
	I0422 18:11:06.635189   62266 main.go:141] libmachine: (auto-457191)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 18:11:06.635205   62266 main.go:141] libmachine: (auto-457191)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/auto-457191/auto-457191.rawdisk'/>
	I0422 18:11:06.635215   62266 main.go:141] libmachine: (auto-457191)       <target dev='hda' bus='virtio'/>
	I0422 18:11:06.635224   62266 main.go:141] libmachine: (auto-457191)     </disk>
	I0422 18:11:06.635234   62266 main.go:141] libmachine: (auto-457191)     <interface type='network'>
	I0422 18:11:06.635252   62266 main.go:141] libmachine: (auto-457191)       <source network='mk-auto-457191'/>
	I0422 18:11:06.635263   62266 main.go:141] libmachine: (auto-457191)       <model type='virtio'/>
	I0422 18:11:06.635272   62266 main.go:141] libmachine: (auto-457191)     </interface>
	I0422 18:11:06.635283   62266 main.go:141] libmachine: (auto-457191)     <interface type='network'>
	I0422 18:11:06.635294   62266 main.go:141] libmachine: (auto-457191)       <source network='default'/>
	I0422 18:11:06.635305   62266 main.go:141] libmachine: (auto-457191)       <model type='virtio'/>
	I0422 18:11:06.635313   62266 main.go:141] libmachine: (auto-457191)     </interface>
	I0422 18:11:06.635323   62266 main.go:141] libmachine: (auto-457191)     <serial type='pty'>
	I0422 18:11:06.635334   62266 main.go:141] libmachine: (auto-457191)       <target port='0'/>
	I0422 18:11:06.635346   62266 main.go:141] libmachine: (auto-457191)     </serial>
	I0422 18:11:06.635358   62266 main.go:141] libmachine: (auto-457191)     <console type='pty'>
	I0422 18:11:06.635366   62266 main.go:141] libmachine: (auto-457191)       <target type='serial' port='0'/>
	I0422 18:11:06.635377   62266 main.go:141] libmachine: (auto-457191)     </console>
	I0422 18:11:06.635386   62266 main.go:141] libmachine: (auto-457191)     <rng model='virtio'>
	I0422 18:11:06.635398   62266 main.go:141] libmachine: (auto-457191)       <backend model='random'>/dev/random</backend>
	I0422 18:11:06.635408   62266 main.go:141] libmachine: (auto-457191)     </rng>
	I0422 18:11:06.635415   62266 main.go:141] libmachine: (auto-457191)     
	I0422 18:11:06.635434   62266 main.go:141] libmachine: (auto-457191)     
	I0422 18:11:06.635447   62266 main.go:141] libmachine: (auto-457191)   </devices>
	I0422 18:11:06.635456   62266 main.go:141] libmachine: (auto-457191) </domain>
	I0422 18:11:06.635465   62266 main.go:141] libmachine: (auto-457191) 
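The XML printed above is the domain definition the kvm2 driver hands to libvirt: 3072 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a SCSI cdrom, the raw disk on virtio, and two virtio NICs (the private mk-auto-457191 network plus libvirt's default network). A minimal way to inspect the result by hand, as a sketch assuming virsh is available on the libvirt host:

    # Inspect the domain the kvm2 driver just defined (names taken from the log above).
    virsh dumpxml auto-457191      # full live definition, including both virtio interfaces
    virsh domstate auto-457191     # "running" once the create step below succeeds
    virsh net-list --all           # both "default" and "mk-auto-457191" should be active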
	I0422 18:11:06.639850   62266 main.go:141] libmachine: (auto-457191) DBG | domain auto-457191 has defined MAC address 52:54:00:d1:be:03 in network default
	I0422 18:11:06.640421   62266 main.go:141] libmachine: (auto-457191) Ensuring networks are active...
	I0422 18:11:06.640448   62266 main.go:141] libmachine: (auto-457191) DBG | domain auto-457191 has defined MAC address 52:54:00:6d:c5:16 in network mk-auto-457191
	I0422 18:11:06.641195   62266 main.go:141] libmachine: (auto-457191) Ensuring network default is active
	I0422 18:11:06.641633   62266 main.go:141] libmachine: (auto-457191) Ensuring network mk-auto-457191 is active
	I0422 18:11:06.642380   62266 main.go:141] libmachine: (auto-457191) Getting domain xml...
	I0422 18:11:06.643307   62266 main.go:141] libmachine: (auto-457191) Creating domain...
	I0422 18:11:07.868407   62266 main.go:141] libmachine: (auto-457191) Waiting to get IP...
	I0422 18:11:07.869380   62266 main.go:141] libmachine: (auto-457191) DBG | domain auto-457191 has defined MAC address 52:54:00:6d:c5:16 in network mk-auto-457191
	I0422 18:11:07.869860   62266 main.go:141] libmachine: (auto-457191) DBG | unable to find current IP address of domain auto-457191 in network mk-auto-457191
	I0422 18:11:07.869882   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:07.869822   62426 retry.go:31] will retry after 246.358838ms: waiting for machine to come up
	I0422 18:11:08.118396   62266 main.go:141] libmachine: (auto-457191) DBG | domain auto-457191 has defined MAC address 52:54:00:6d:c5:16 in network mk-auto-457191
	I0422 18:11:08.119061   62266 main.go:141] libmachine: (auto-457191) DBG | unable to find current IP address of domain auto-457191 in network mk-auto-457191
	I0422 18:11:08.119112   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:08.119030   62426 retry.go:31] will retry after 323.589043ms: waiting for machine to come up
	I0422 18:11:08.444914   62266 main.go:141] libmachine: (auto-457191) DBG | domain auto-457191 has defined MAC address 52:54:00:6d:c5:16 in network mk-auto-457191
	I0422 18:11:08.445487   62266 main.go:141] libmachine: (auto-457191) DBG | unable to find current IP address of domain auto-457191 in network mk-auto-457191
	I0422 18:11:08.445517   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:08.445437   62426 retry.go:31] will retry after 365.75685ms: waiting for machine to come up
	I0422 18:11:08.813084   62266 main.go:141] libmachine: (auto-457191) DBG | domain auto-457191 has defined MAC address 52:54:00:6d:c5:16 in network mk-auto-457191
	I0422 18:11:08.813704   62266 main.go:141] libmachine: (auto-457191) DBG | unable to find current IP address of domain auto-457191 in network mk-auto-457191
	I0422 18:11:08.813730   62266 main.go:141] libmachine: (auto-457191) DBG | I0422 18:11:08.813658   62426 retry.go:31] will retry after 450.747391ms: waiting for machine to come up
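The retry loop above is the driver waiting for the guest to pick up a DHCP lease on the private network. If this stage hangs in a real run, the lease tables can be checked by hand; a sketch using the MAC addresses reported earlier in the log:

    # Look for leases matching the MACs the driver registered for this domain.
    virsh net-dhcp-leases mk-auto-457191 | grep -i 52:54:00:6d:c5:16
    virsh net-dhcp-leases default        | grep -i 52:54:00:d1:be:03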
	I0422 18:11:06.872221   61825 docker.go:233] disabling docker service ...
	I0422 18:11:06.872292   61825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:11:06.891092   61825 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:11:06.905219   61825 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:11:07.046248   61825 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:11:07.222929   61825 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
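At this point the docker socket and service have been stopped, the socket disabled and the service masked, so CRI-O is the only runtime left on the node. A quick hand check that the masking took effect, assuming shell access to the VM (for example via minikube ssh):

    systemctl is-enabled docker.service   # should print "masked"
    systemctl is-active docker.socket     # should print "inactive"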
	I0422 18:11:07.238098   61825 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:11:07.262762   61825 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:11:07.262824   61825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:11:07.277092   61825 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:11:07.277158   61825 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:11:07.289436   61825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:11:07.304891   61825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:11:07.316254   61825 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:11:07.327996   61825 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:11:07.339329   61825 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:11:07.353584   61825 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:11:07.364593   61825 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:11:07.375052   61825 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:11:07.386431   61825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:11:07.541302   61825 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:11:10.391541   61825 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.850170681s)
	I0422 18:11:10.391584   61825 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:11:10.391640   61825 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:11:10.396765   61825 start.go:562] Will wait 60s for crictl version
	I0422 18:11:10.396826   61825 ssh_runner.go:195] Run: which crictl
	I0422 18:11:10.400957   61825 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:11:10.443865   61825 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:11:10.443959   61825 ssh_runner.go:195] Run: crio --version
	I0422 18:11:10.477064   61825 ssh_runner.go:195] Run: crio --version
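The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and write /etc/crictl.yaml before CRI-O is restarted. A minimal sketch for verifying the result on the node:

    cat /etc/crictl.yaml                   # runtime-endpoint: unix:///var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl info | head                # CRI-O should answer on the configured socket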
	I0422 18:11:10.509748   61825 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:11:10.511415   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) Calling .GetIP
	I0422 18:11:10.514555   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:10.514919   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:2a:6b", ip: ""} in network mk-kubernetes-upgrade-432126: {Iface:virbr2 ExpiryTime:2024-04-22 19:10:00 +0000 UTC Type:0 Mac:52:54:00:54:2a:6b Iaid: IPaddr:192.168.50.33 Prefix:24 Hostname:kubernetes-upgrade-432126 Clientid:01:52:54:00:54:2a:6b}
	I0422 18:11:10.514947   61825 main.go:141] libmachine: (kubernetes-upgrade-432126) DBG | domain kubernetes-upgrade-432126 has defined IP address 192.168.50.33 and MAC address 52:54:00:54:2a:6b in network mk-kubernetes-upgrade-432126
	I0422 18:11:10.515206   61825 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:11:10.521148   61825 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-432126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.33 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:11:10.521278   61825 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:11:10.521333   61825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:11:10.576118   61825 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:11:10.576142   61825 crio.go:433] Images already preloaded, skipping extraction
	I0422 18:11:10.576202   61825 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:11:10.612569   61825 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:11:10.612601   61825 cache_images.go:84] Images are preloaded, skipping loading
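"Images are preloaded" means the v1.30.0/crio preload tarball already populated CRI-O's image store, so no pulls are needed. A rough hand check, purely illustrative and not part of the test flow:

    sudo crictl images | grep registry.k8s.io   # the v1.30.0 control-plane images should already be listed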
	I0422 18:11:10.612610   61825 kubeadm.go:928] updating node { 192.168.50.33 8443 v1.30.0 crio true true} ...
	I0422 18:11:10.612716   61825 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-432126 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
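The unit fragment above becomes the kubelet systemd drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. If the kubelet flags look wrong on a node, the effective unit can be dumped directly; a sketch:

    systemctl cat kubelet                  # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart    # the resolved ExecStart line, including --node-ip and --hostname-override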
	I0422 18:11:10.612782   61825 ssh_runner.go:195] Run: crio config
	I0422 18:11:10.663747   61825 cni.go:84] Creating CNI manager for ""
	I0422 18:11:10.663775   61825 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:11:10.663788   61825 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:11:10.663817   61825 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.33 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-432126 NodeName:kubernetes-upgrade-432126 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:11:10.663981   61825 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-432126"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:11:10.664048   61825 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:11:10.675311   61825 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:11:10.675388   61825 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:11:10.685975   61825 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0422 18:11:10.705231   61825 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:11:10.725081   61825 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
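The 2166-byte file copied above is the rendered kubeadm config shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). As a sketch only, not something this test does, and only sensible on a node that has not already been initialized, the file can be exercised with kubeadm's dry-run mode:

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run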
	I0422 18:11:10.744255   61825 ssh_runner.go:195] Run: grep 192.168.50.33	control-plane.minikube.internal$ /etc/hosts
	I0422 18:11:10.748772   61825 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:11:10.900384   61825 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:11:10.916653   61825 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126 for IP: 192.168.50.33
	I0422 18:11:10.916680   61825 certs.go:194] generating shared ca certs ...
	I0422 18:11:10.916699   61825 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:11:10.916868   61825 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:11:10.916921   61825 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:11:10.916939   61825 certs.go:256] generating profile certs ...
	I0422 18:11:10.917012   61825 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/client.key
	I0422 18:11:10.917062   61825 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key.54934615
	I0422 18:11:10.917093   61825 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.key
	I0422 18:11:10.917197   61825 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:11:10.917225   61825 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:11:10.917233   61825 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:11:10.917253   61825 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:11:10.917273   61825 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:11:10.917293   61825 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:11:10.917326   61825 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:11:10.917906   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:11:10.945437   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:11:10.971368   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:11:10.999052   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:11:11.026730   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0422 18:11:11.054577   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:11:11.083439   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:11:11.113445   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kubernetes-upgrade-432126/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:11:11.143281   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:11:11.170488   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:11:11.199278   61825 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:11:11.227946   61825 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:11:11.251161   61825 ssh_runner.go:195] Run: openssl version
	I0422 18:11:11.259329   61825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:11:11.272345   61825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:11:11.277317   61825 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:11:11.277382   61825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:11:11.284115   61825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:11:11.295618   61825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:11:11.308553   61825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:11:11.314297   61825 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:11:11.314389   61825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:11:11.320906   61825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:11:11.332763   61825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:11:11.345296   61825 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:11:11.350204   61825 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:11:11.350265   61825 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:11:11.356829   61825 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
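The ln -fs steps above create OpenSSL-style hash symlinks (3ec20f2e.0, b5213941.0, 51391683.0) so that system trust lookups find the minikube CA and the test certificates. The symlink name is simply the certificate's subject hash; a short check, for illustration:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # should point back at minikubeCA.pem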
	I0422 18:11:11.367620   61825 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:11:11.373213   61825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:11:11.379976   61825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:11:11.386282   61825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:11:11.393082   61825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:11:11.399426   61825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:11:11.405771   61825 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
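Each openssl run above uses -checkend 86400, which exits non-zero if the certificate will expire within the next 86400 seconds (24 hours), so a silent pass here means the existing control-plane certs are still usable. Checking one cert by hand, for illustration:

    openssl x509 -noout -enddate  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid for >24h"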
	I0422 18:11:11.412211   61825 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-432126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0 ClusterName:kubernetes-upgrade-432126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.33 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:11:11.412296   61825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:11:11.412371   61825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:11:11.460420   61825 cri.go:89] found id: "ac401dd4263db4725be84e32454a441264bd2d07f8eec1fbf07342401444c57d"
	I0422 18:11:11.460458   61825 cri.go:89] found id: "fa34dff71dd13bc788c1076f391141dae45844a11b7552bea1297944e12def4c"
	I0422 18:11:11.460465   61825 cri.go:89] found id: "193c21405fd8aecb35fd74e0cefa06fe9b9cd3c15dffb34c917c8ffb8f581d32"
	I0422 18:11:11.460488   61825 cri.go:89] found id: "5bfc6b03808addc210609a045b5ae58dbda4b756b59b96ca62978bc066e798c6"
	I0422 18:11:11.460495   61825 cri.go:89] found id: "9dc9b55e39720cbb1e775a0a0181c0b0a266573c7b1e6ce603902e1223c378cf"
	I0422 18:11:11.460500   61825 cri.go:89] found id: "804999b472d9cb86da2265b42570041a2b3a9f62877bb29b81497d754ae87b78"
	I0422 18:11:11.460504   61825 cri.go:89] found id: "4430ef7ed675ec83ab46daae7912c1030f0797de8455a9396bcf209c776c78a9"
	I0422 18:11:11.460508   61825 cri.go:89] found id: "bc4afcc13b459f5b13fef180c9db5e249ef343a3ffa1ab7ed4c02eb9210eb698"
	I0422 18:11:11.460511   61825 cri.go:89] found id: ""
	I0422 18:11:11.460558   61825 ssh_runner.go:195] Run: sudo runc list -f json
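StartCluster begins by asking CRI-O for every kube-system container (the crictl ps call above) and then lists runc's view of the same containers. The returned IDs can be resolved back to their pods by hand; a sketch using the first ID from the list:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo crictl inspect ac401dd4263db4725be84e32454a441264bd2d07f8eec1fbf07342401444c57d | head -n 20   # container and pod metadata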
	
	
	==> CRI-O <==
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.803373048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809481803350839,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb8913f6-6fd0-43f4-8013-a319a0d55ae3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.804209355Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d23fb1e1-9112-4189-9c92-43ccd3df4f20 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.804271960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d23fb1e1-9112-4189-9c92-43ccd3df4f20 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.804891047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:efb476283c7cb8811efa8be61e8a6272ec38c04af954012e578a88eb173b8892,PodSandboxId:50060c587ab2ae734db7504a37f7fd07c3799d3d67aff32868058eabc94c2166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479110345135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ca6d56367e756ace81cc476382576f74ed8a7ab80e399cc62fc7c3aa36091d,PodSandboxId:d75d641563e4ae67345d913906a8a0740313adedc9672800dfdfed95d621faf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479007726021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da271892b148bb4ba6765467ea3c5eb124aa8894f667766a183a4984c19a6276,PodSandboxId:91f7f86c410e00e6fb82586e45ad5b5111ecf3099989be1c6e782356256f2e31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1713809478494847150,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1511b6b467f8f1e9d45ace7894443647b7784fae06a28cc841d116282d21b3b4,PodSandboxId:33339ebde3f5d3972bcd0d963505f07396c48384ebc6fd0ceac258d8297853ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3809478472996694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da688746c8c9713bad225e04e9b92f82a9aaa12a7821a3582fc01ee739c543e,PodSandboxId:1e56b6b71d98cc486e98d427a2afffeec1b3bf3672ef382fe3bb7dfc15a7205b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17138094747
50127188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ed9dc5563a7ec6678de26c600a856d02b6ecf9822fd1c9de2b9fcb8762de9e,PodSandboxId:4b17dd5e678b81c7fce5d0dfe01e1e0026c9ff6176493d8602784c42e57052cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedA
t:1713809474721987843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5dc423541e6c9a6b0c9730d26c54dcfa1b8c63929a862bf3429949eaa43d7e5,PodSandboxId:f5437d64bb68b7dc23af6af9ecbd10a8c0c30099a8a5a4479c24e49afffea883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809474708
866399,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1724c1f9564057b57d607c645d869de834cd0677f8ccf335fa3bee917948317b,PodSandboxId:e3e59af373ab3a4ac3a9aecd72e3d618f895a03215d8ff00efb6bdd63ffb27dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809474627662742,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac401dd4263db4725be84e32454a441264bd2d07f8eec1fbf07342401444c57d,PodSandboxId:ef126314cd654159502e6223aa4a5b5d51ded44fc81e39a550eee063e97861e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441419750830,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa34dff71dd13bc788c1076f391141dae45844a11b7552bea1297944e12def4c,PodSandboxId:72b39d370114c2e40cb4556e4b6ae7f6e77baa30e89894ca510e61af18bfa5a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441378887716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193c21405fd8aecb35fd74e0cefa06fe9b9cd3c15dffb34c917c8ffb8f581d32,PodSandboxId:c386ac3d9235c623d408871018ed19718057668840f01c
140e73145534ddcb46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809440657022187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfc6b03808addc210609a045b5ae58dbda4b756b59b96ca62978bc066e798c6,PodSandboxId:31e67d6c8a09767128a16ffe3ddb127178f29705a19b384230f9f87eb017a2c5,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713809440186112712,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc9b55e39720cbb1e775a0a0181c0b0a266573c7b1e6ce603902e1223c378cf,PodSandboxId:36e9259cf9ef251a235a27d4e8aa3300602957f7d738242d8f6951b7ca2e4bc6,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809420859049696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804999b472d9cb86da2265b42570041a2b3a9f62877bb29b81497d754ae87b78,PodSandboxId:30962e01770bbef819321b0b72d5605111a0df423ea52c097004c78348c20673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809420802339205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4430ef7ed675ec83ab46daae7912c1030f0797de8455a9396bcf209c776c78a9,PodSandboxId:bec605955051c80e177be201827fabf8298bd7a30e9e8db712056be9e39c7a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809420801812558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4afcc13b459f5b13fef180c9db5e249ef343a3ffa1ab7ed4c02eb9210eb698,PodSandboxId:bad73bbbdd73b1e7f99f0dcd2a8931f7a0ac02c5081ebf3d74c86ae10188b320,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809420717934472,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d23fb1e1-9112-4189-9c92-43ccd3df4f20 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.857815182Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84554d80-3af9-4a2a-83fd-9eb3c0b4bd16 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.857896267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84554d80-3af9-4a2a-83fd-9eb3c0b4bd16 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.859633372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5cde2bc6-07f4-4608-9a74-7d97914ef3f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.860398715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809481860257330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5cde2bc6-07f4-4608-9a74-7d97914ef3f7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.861416054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76c11e2b-1ac7-484b-9ffd-4a5c2237a204 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.861527573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76c11e2b-1ac7-484b-9ffd-4a5c2237a204 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.862050527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:efb476283c7cb8811efa8be61e8a6272ec38c04af954012e578a88eb173b8892,PodSandboxId:50060c587ab2ae734db7504a37f7fd07c3799d3d67aff32868058eabc94c2166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479110345135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ca6d56367e756ace81cc476382576f74ed8a7ab80e399cc62fc7c3aa36091d,PodSandboxId:d75d641563e4ae67345d913906a8a0740313adedc9672800dfdfed95d621faf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479007726021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da271892b148bb4ba6765467ea3c5eb124aa8894f667766a183a4984c19a6276,PodSandboxId:91f7f86c410e00e6fb82586e45ad5b5111ecf3099989be1c6e782356256f2e31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1713809478494847150,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1511b6b467f8f1e9d45ace7894443647b7784fae06a28cc841d116282d21b3b4,PodSandboxId:33339ebde3f5d3972bcd0d963505f07396c48384ebc6fd0ceac258d8297853ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3809478472996694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da688746c8c9713bad225e04e9b92f82a9aaa12a7821a3582fc01ee739c543e,PodSandboxId:1e56b6b71d98cc486e98d427a2afffeec1b3bf3672ef382fe3bb7dfc15a7205b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17138094747
50127188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ed9dc5563a7ec6678de26c600a856d02b6ecf9822fd1c9de2b9fcb8762de9e,PodSandboxId:4b17dd5e678b81c7fce5d0dfe01e1e0026c9ff6176493d8602784c42e57052cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedA
t:1713809474721987843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5dc423541e6c9a6b0c9730d26c54dcfa1b8c63929a862bf3429949eaa43d7e5,PodSandboxId:f5437d64bb68b7dc23af6af9ecbd10a8c0c30099a8a5a4479c24e49afffea883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809474708
866399,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1724c1f9564057b57d607c645d869de834cd0677f8ccf335fa3bee917948317b,PodSandboxId:e3e59af373ab3a4ac3a9aecd72e3d618f895a03215d8ff00efb6bdd63ffb27dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809474627662742,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac401dd4263db4725be84e32454a441264bd2d07f8eec1fbf07342401444c57d,PodSandboxId:ef126314cd654159502e6223aa4a5b5d51ded44fc81e39a550eee063e97861e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441419750830,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa34dff71dd13bc788c1076f391141dae45844a11b7552bea1297944e12def4c,PodSandboxId:72b39d370114c2e40cb4556e4b6ae7f6e77baa30e89894ca510e61af18bfa5a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441378887716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193c21405fd8aecb35fd74e0cefa06fe9b9cd3c15dffb34c917c8ffb8f581d32,PodSandboxId:c386ac3d9235c623d408871018ed19718057668840f01c
140e73145534ddcb46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809440657022187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfc6b03808addc210609a045b5ae58dbda4b756b59b96ca62978bc066e798c6,PodSandboxId:31e67d6c8a09767128a16ffe3ddb127178f29705a19b384230f9f87eb017a2c5,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713809440186112712,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc9b55e39720cbb1e775a0a0181c0b0a266573c7b1e6ce603902e1223c378cf,PodSandboxId:36e9259cf9ef251a235a27d4e8aa3300602957f7d738242d8f6951b7ca2e4bc6,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809420859049696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804999b472d9cb86da2265b42570041a2b3a9f62877bb29b81497d754ae87b78,PodSandboxId:30962e01770bbef819321b0b72d5605111a0df423ea52c097004c78348c20673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809420802339205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4430ef7ed675ec83ab46daae7912c1030f0797de8455a9396bcf209c776c78a9,PodSandboxId:bec605955051c80e177be201827fabf8298bd7a30e9e8db712056be9e39c7a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809420801812558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4afcc13b459f5b13fef180c9db5e249ef343a3ffa1ab7ed4c02eb9210eb698,PodSandboxId:bad73bbbdd73b1e7f99f0dcd2a8931f7a0ac02c5081ebf3d74c86ae10188b320,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809420717934472,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76c11e2b-1ac7-484b-9ffd-4a5c2237a204 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.910542642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17113514-4fb2-41a4-9fee-50fd135bc240 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.910646224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17113514-4fb2-41a4-9fee-50fd135bc240 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.912722992Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22596a4c-7ec0-4d32-a472-16d62440e7c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.913632543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809481913594613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22596a4c-7ec0-4d32-a472-16d62440e7c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.914653270Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37f76eec-4385-41d4-8cda-1555aa431def name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.914728379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37f76eec-4385-41d4-8cda-1555aa431def name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.918018858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:efb476283c7cb8811efa8be61e8a6272ec38c04af954012e578a88eb173b8892,PodSandboxId:50060c587ab2ae734db7504a37f7fd07c3799d3d67aff32868058eabc94c2166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479110345135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ca6d56367e756ace81cc476382576f74ed8a7ab80e399cc62fc7c3aa36091d,PodSandboxId:d75d641563e4ae67345d913906a8a0740313adedc9672800dfdfed95d621faf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479007726021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da271892b148bb4ba6765467ea3c5eb124aa8894f667766a183a4984c19a6276,PodSandboxId:91f7f86c410e00e6fb82586e45ad5b5111ecf3099989be1c6e782356256f2e31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1713809478494847150,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1511b6b467f8f1e9d45ace7894443647b7784fae06a28cc841d116282d21b3b4,PodSandboxId:33339ebde3f5d3972bcd0d963505f07396c48384ebc6fd0ceac258d8297853ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3809478472996694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da688746c8c9713bad225e04e9b92f82a9aaa12a7821a3582fc01ee739c543e,PodSandboxId:1e56b6b71d98cc486e98d427a2afffeec1b3bf3672ef382fe3bb7dfc15a7205b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17138094747
50127188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ed9dc5563a7ec6678de26c600a856d02b6ecf9822fd1c9de2b9fcb8762de9e,PodSandboxId:4b17dd5e678b81c7fce5d0dfe01e1e0026c9ff6176493d8602784c42e57052cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedA
t:1713809474721987843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5dc423541e6c9a6b0c9730d26c54dcfa1b8c63929a862bf3429949eaa43d7e5,PodSandboxId:f5437d64bb68b7dc23af6af9ecbd10a8c0c30099a8a5a4479c24e49afffea883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809474708
866399,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1724c1f9564057b57d607c645d869de834cd0677f8ccf335fa3bee917948317b,PodSandboxId:e3e59af373ab3a4ac3a9aecd72e3d618f895a03215d8ff00efb6bdd63ffb27dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809474627662742,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac401dd4263db4725be84e32454a441264bd2d07f8eec1fbf07342401444c57d,PodSandboxId:ef126314cd654159502e6223aa4a5b5d51ded44fc81e39a550eee063e97861e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441419750830,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa34dff71dd13bc788c1076f391141dae45844a11b7552bea1297944e12def4c,PodSandboxId:72b39d370114c2e40cb4556e4b6ae7f6e77baa30e89894ca510e61af18bfa5a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441378887716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193c21405fd8aecb35fd74e0cefa06fe9b9cd3c15dffb34c917c8ffb8f581d32,PodSandboxId:c386ac3d9235c623d408871018ed19718057668840f01c
140e73145534ddcb46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809440657022187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfc6b03808addc210609a045b5ae58dbda4b756b59b96ca62978bc066e798c6,PodSandboxId:31e67d6c8a09767128a16ffe3ddb127178f29705a19b384230f9f87eb017a2c5,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713809440186112712,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc9b55e39720cbb1e775a0a0181c0b0a266573c7b1e6ce603902e1223c378cf,PodSandboxId:36e9259cf9ef251a235a27d4e8aa3300602957f7d738242d8f6951b7ca2e4bc6,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809420859049696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804999b472d9cb86da2265b42570041a2b3a9f62877bb29b81497d754ae87b78,PodSandboxId:30962e01770bbef819321b0b72d5605111a0df423ea52c097004c78348c20673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809420802339205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4430ef7ed675ec83ab46daae7912c1030f0797de8455a9396bcf209c776c78a9,PodSandboxId:bec605955051c80e177be201827fabf8298bd7a30e9e8db712056be9e39c7a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809420801812558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4afcc13b459f5b13fef180c9db5e249ef343a3ffa1ab7ed4c02eb9210eb698,PodSandboxId:bad73bbbdd73b1e7f99f0dcd2a8931f7a0ac02c5081ebf3d74c86ae10188b320,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809420717934472,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37f76eec-4385-41d4-8cda-1555aa431def name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.960231263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=467b3144-ac67-4983-872f-9caf212e5139 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.960329297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=467b3144-ac67-4983-872f-9caf212e5139 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.961665562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=84eeb27d-aaab-4a85-a887-1d30b38a24af name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.962142119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809481962115371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=84eeb27d-aaab-4a85-a887-1d30b38a24af name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.963844042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfdddc34-7630-4519-a44c-4c5a06e22e6d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.963963987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfdddc34-7630-4519-a44c-4c5a06e22e6d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:11:21 kubernetes-upgrade-432126 crio[2282]: time="2024-04-22 18:11:21.964730942Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:efb476283c7cb8811efa8be61e8a6272ec38c04af954012e578a88eb173b8892,PodSandboxId:50060c587ab2ae734db7504a37f7fd07c3799d3d67aff32868058eabc94c2166,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479110345135,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ca6d56367e756ace81cc476382576f74ed8a7ab80e399cc62fc7c3aa36091d,PodSandboxId:d75d641563e4ae67345d913906a8a0740313adedc9672800dfdfed95d621faf1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809479007726021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da271892b148bb4ba6765467ea3c5eb124aa8894f667766a183a4984c19a6276,PodSandboxId:91f7f86c410e00e6fb82586e45ad5b5111ecf3099989be1c6e782356256f2e31,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAIN
ER_RUNNING,CreatedAt:1713809478494847150,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1511b6b467f8f1e9d45ace7894443647b7784fae06a28cc841d116282d21b3b4,PodSandboxId:33339ebde3f5d3972bcd0d963505f07396c48384ebc6fd0ceac258d8297853ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3809478472996694,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2da688746c8c9713bad225e04e9b92f82a9aaa12a7821a3582fc01ee739c543e,PodSandboxId:1e56b6b71d98cc486e98d427a2afffeec1b3bf3672ef382fe3bb7dfc15a7205b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:17138094747
50127188,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97ed9dc5563a7ec6678de26c600a856d02b6ecf9822fd1c9de2b9fcb8762de9e,PodSandboxId:4b17dd5e678b81c7fce5d0dfe01e1e0026c9ff6176493d8602784c42e57052cf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedA
t:1713809474721987843,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5dc423541e6c9a6b0c9730d26c54dcfa1b8c63929a862bf3429949eaa43d7e5,PodSandboxId:f5437d64bb68b7dc23af6af9ecbd10a8c0c30099a8a5a4479c24e49afffea883,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809474708
866399,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1724c1f9564057b57d607c645d869de834cd0677f8ccf335fa3bee917948317b,PodSandboxId:e3e59af373ab3a4ac3a9aecd72e3d618f895a03215d8ff00efb6bdd63ffb27dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809474627662742,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac401dd4263db4725be84e32454a441264bd2d07f8eec1fbf07342401444c57d,PodSandboxId:ef126314cd654159502e6223aa4a5b5d51ded44fc81e39a550eee063e97861e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441419750830,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-85l2j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bf4d-3b3b-444b-a70c-a96ccd4c73a6,},Annotations:map[string]string{io.kubernetes.container.hash: 87d750f3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa34dff71dd13bc788c1076f391141dae45844a11b7552bea1297944e12def4c,PodSandboxId:72b39d370114c2e40cb4556e4b6ae7f6e77baa30e89894ca510e61af18bfa5a2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809441378887716,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-q52fd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8c2617e-f505-4495-b727-f7b30e6c5961,},Annotations:map[string]string{io.kubernetes.container.hash: d53b72a1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:193c21405fd8aecb35fd74e0cefa06fe9b9cd3c15dffb34c917c8ffb8f581d32,PodSandboxId:c386ac3d9235c623d408871018ed19718057668840f01c
140e73145534ddcb46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809440657022187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fcpkb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29d40e54-368c-4818-8938-936b7e6468e0,},Annotations:map[string]string{io.kubernetes.container.hash: d7584468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bfc6b03808addc210609a045b5ae58dbda4b756b59b96ca62978bc066e798c6,PodSandboxId:31e67d6c8a09767128a16ffe3ddb127178f29705a19b384230f9f87eb017a2c5,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713809440186112712,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63ef2b1e-da54-4c89-ae8c-a73f22864840,},Annotations:map[string]string{io.kubernetes.container.hash: ee838698,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc9b55e39720cbb1e775a0a0181c0b0a266573c7b1e6ce603902e1223c378cf,PodSandboxId:36e9259cf9ef251a235a27d4e8aa3300602957f7d738242d8f6951b7ca2e4bc6,Metadata:&ContainerMetadata{
Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809420859049696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17b6a80e0449faf83d2eab6f3cda5fff,},Annotations:map[string]string{io.kubernetes.container.hash: 7aaca069,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804999b472d9cb86da2265b42570041a2b3a9f62877bb29b81497d754ae87b78,PodSandboxId:30962e01770bbef819321b0b72d5605111a0df423ea52c097004c78348c20673,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Imag
e:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809420802339205,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 401455fe8f5f1267a6caa6bf1f6ef966,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4430ef7ed675ec83ab46daae7912c1030f0797de8455a9396bcf209c776c78a9,PodSandboxId:bec605955051c80e177be201827fabf8298bd7a30e9e8db712056be9e39c7a20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809420801812558,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea5a1f50ca8063ea591067e862abe05a,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4afcc13b459f5b13fef180c9db5e249ef343a3ffa1ab7ed4c02eb9210eb698,PodSandboxId:bad73bbbdd73b1e7f99f0dcd2a8931f7a0ac02c5081ebf3d74c86ae10188b320,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809420717934472,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-432126,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12a6dc0db4a281f9d1b3bc4352ec9228,},Annotations:map[string]string{io.kubernetes.container.hash: 43f455a5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfdddc34-7630-4519-a44c-4c5a06e22e6d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	efb476283c7cb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago        Running             coredns                   1                   50060c587ab2a       coredns-7db6d8ff4d-85l2j
	57ca6d56367e7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   d75d641563e4a       coredns-7db6d8ff4d-q52fd
	da271892b148b       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   3 seconds ago        Running             kube-proxy                1                   91f7f86c410e0       kube-proxy-fcpkb
	1511b6b467f8f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       1                   33339ebde3f5d       storage-provisioner
	2da688746c8c9       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   7 seconds ago        Running             kube-controller-manager   1                   1e56b6b71d98c       kube-controller-manager-kubernetes-upgrade-432126
	97ed9dc5563a7       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   7 seconds ago        Running             kube-apiserver            1                   4b17dd5e678b8       kube-apiserver-kubernetes-upgrade-432126
	f5dc423541e6c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago        Running             etcd                      1                   f5437d64bb68b       etcd-kubernetes-upgrade-432126
	1724c1f956405       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   7 seconds ago        Running             kube-scheduler            1                   e3e59af373ab3       kube-scheduler-kubernetes-upgrade-432126
	ac401dd4263db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago       Exited              coredns                   0                   ef126314cd654       coredns-7db6d8ff4d-85l2j
	fa34dff71dd13       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago       Exited              coredns                   0                   72b39d370114c       coredns-7db6d8ff4d-q52fd
	193c21405fd8a       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   41 seconds ago       Exited              kube-proxy                0                   c386ac3d9235c       kube-proxy-fcpkb
	5bfc6b03808ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   41 seconds ago       Exited              storage-provisioner       0                   31e67d6c8a097       storage-provisioner
	9dc9b55e39720       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   36e9259cf9ef2       etcd-kubernetes-upgrade-432126
	804999b472d9c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   About a minute ago   Exited              kube-scheduler            0                   30962e01770bb       kube-scheduler-kubernetes-upgrade-432126
	4430ef7ed675e       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   About a minute ago   Exited              kube-controller-manager   0                   bec605955051c       kube-controller-manager-kubernetes-upgrade-432126
	bc4afcc13b459       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   About a minute ago   Exited              kube-apiserver            0                   bad73bbbdd73b       kube-apiserver-kubernetes-upgrade-432126
	
	
	==> coredns [57ca6d56367e756ace81cc476382576f74ed8a7ab80e399cc62fc7c3aa36091d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ac401dd4263db4725be84e32454a441264bd2d07f8eec1fbf07342401444c57d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [efb476283c7cb8811efa8be61e8a6272ec38c04af954012e578a88eb173b8892] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fa34dff71dd13bc788c1076f391141dae45844a11b7552bea1297944e12def4c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-432126
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-432126
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:10:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-432126
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:11:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:11:18 +0000   Mon, 22 Apr 2024 18:10:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:11:18 +0000   Mon, 22 Apr 2024 18:10:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:11:18 +0000   Mon, 22 Apr 2024 18:10:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:11:18 +0000   Mon, 22 Apr 2024 18:10:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.33
	  Hostname:    kubernetes-upgrade-432126
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 925a3e6487764f049f5433253177d296
	  System UUID:                925a3e64-8776-4f04-9f54-33253177d296
	  Boot ID:                    aef79b73-04f2-4da6-ac76-7ccab5ac95c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-85l2j                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     43s
	  kube-system                 coredns-7db6d8ff4d-q52fd                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     43s
	  kube-system                 etcd-kubernetes-upgrade-432126                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         56s
	  kube-system                 kube-apiserver-kubernetes-upgrade-432126             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-432126    200m (10%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-proxy-fcpkb                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-scheduler-kubernetes-upgrade-432126             100m (5%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 63s)  kubelet          Node kubernetes-upgrade-432126 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 63s)  kubelet          Node kubernetes-upgrade-432126 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x7 over 63s)  kubelet          Node kubernetes-upgrade-432126 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           44s                node-controller  Node kubernetes-upgrade-432126 event: Registered Node kubernetes-upgrade-432126 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.860236] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.056364] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055277] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.198821] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.152760] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.327954] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +4.758683] systemd-fstab-generator[732]: Ignoring "noauto" option for root device
	[  +0.069536] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.129502] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[ +11.245834] systemd-fstab-generator[1247]: Ignoring "noauto" option for root device
	[  +0.076916] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.180812] kauditd_printk_skb: 21 callbacks suppressed
	[Apr22 18:11] systemd-fstab-generator[2201]: Ignoring "noauto" option for root device
	[  +0.078336] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.066874] systemd-fstab-generator[2213]: Ignoring "noauto" option for root device
	[  +0.173513] systemd-fstab-generator[2227]: Ignoring "noauto" option for root device
	[  +0.166564] systemd-fstab-generator[2239]: Ignoring "noauto" option for root device
	[  +0.320701] systemd-fstab-generator[2267]: Ignoring "noauto" option for root device
	[  +3.360615] systemd-fstab-generator[2422]: Ignoring "noauto" option for root device
	[  +2.884095] systemd-fstab-generator[2547]: Ignoring "noauto" option for root device
	[  +0.089432] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.112473] kauditd_printk_skb: 98 callbacks suppressed
	[  +1.125872] systemd-fstab-generator[3452]: Ignoring "noauto" option for root device
	
	
	==> etcd [9dc9b55e39720cbb1e775a0a0181c0b0a266573c7b1e6ce603902e1223c378cf] <==
	{"level":"info","ts":"2024-04-22T18:10:21.815736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:10:21.816005Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:10:21.816262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:10:21.82294Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:10:21.823011Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"48a667124f087be6","local-member-id":"9612aa3e8bd8b9e4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:10:21.838417Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:10:21.838504Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:10:21.869095Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.33:2379"}
	{"level":"info","ts":"2024-04-22T18:10:21.823229Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:10:21.894213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2024-04-22T18:10:44.887854Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.443226ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-22T18:10:44.888098Z","caller":"traceutil/trace.go:171","msg":"trace[2000363347] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:389; }","duration":"181.747096ms","start":"2024-04-22T18:10:44.706305Z","end":"2024-04-22T18:10:44.888052Z","steps":["trace[2000363347] 'range keys from in-memory index tree'  (duration: 181.353211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-22T18:10:45.788129Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.478963ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13394988451941817747 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.33\" mod_revision:297 > success:<request_put:<key:\"/registry/masterleases/192.168.50.33\" value_size:66 lease:4171616415087041937 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.33\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-04-22T18:10:45.788282Z","caller":"traceutil/trace.go:171","msg":"trace[745369869] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"309.561698ms","start":"2024-04-22T18:10:45.478704Z","end":"2024-04-22T18:10:45.788266Z","steps":["trace[745369869] 'process raft request'  (duration: 132.221549ms)","trace[745369869] 'compare'  (duration: 176.310284ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T18:10:45.78835Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-22T18:10:45.478678Z","time spent":"309.640342ms","remote":"127.0.0.1:37332","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.33\" mod_revision:297 > success:<request_put:<key:\"/registry/masterleases/192.168.50.33\" value_size:66 lease:4171616415087041937 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.33\" > >"}
	{"level":"info","ts":"2024-04-22T18:11:00.051888Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T18:11:00.052014Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-432126","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.33:2380"],"advertise-client-urls":["https://192.168.50.33:2379"]}
	{"level":"warn","ts":"2024-04-22T18:11:00.052091Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:11:00.052197Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:11:00.098749Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.33:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:11:00.098865Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.33:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T18:11:00.09898Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9612aa3e8bd8b9e4","current-leader-member-id":"9612aa3e8bd8b9e4"}
	{"level":"info","ts":"2024-04-22T18:11:00.104052Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.33:2380"}
	{"level":"info","ts":"2024-04-22T18:11:00.104629Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.33:2380"}
	{"level":"info","ts":"2024-04-22T18:11:00.104687Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-432126","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.33:2380"],"advertise-client-urls":["https://192.168.50.33:2379"]}
	
	
	==> etcd [f5dc423541e6c9a6b0c9730d26c54dcfa1b8c63929a862bf3429949eaa43d7e5] <==
	{"level":"info","ts":"2024-04-22T18:11:15.138383Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T18:11:15.138397Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T18:11:15.144557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 switched to configuration voters=(10813892840880912868)"}
	{"level":"info","ts":"2024-04-22T18:11:15.144666Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"48a667124f087be6","local-member-id":"9612aa3e8bd8b9e4","added-peer-id":"9612aa3e8bd8b9e4","added-peer-peer-urls":["https://192.168.50.33:2380"]}
	{"level":"info","ts":"2024-04-22T18:11:15.144802Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"48a667124f087be6","local-member-id":"9612aa3e8bd8b9e4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:11:15.144854Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:11:15.149192Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T18:11:15.149435Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9612aa3e8bd8b9e4","initial-advertise-peer-urls":["https://192.168.50.33:2380"],"listen-peer-urls":["https://192.168.50.33:2380"],"advertise-client-urls":["https://192.168.50.33:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.33:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:11:15.149529Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:11:15.149613Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.33:2380"}
	{"level":"info","ts":"2024-04-22T18:11:15.14964Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.33:2380"}
	{"level":"info","ts":"2024-04-22T18:11:16.401782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T18:11:16.401846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:11:16.401895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 received MsgPreVoteResp from 9612aa3e8bd8b9e4 at term 2"}
	{"level":"info","ts":"2024-04-22T18:11:16.401919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T18:11:16.401947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 received MsgVoteResp from 9612aa3e8bd8b9e4 at term 3"}
	{"level":"info","ts":"2024-04-22T18:11:16.401959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9612aa3e8bd8b9e4 became leader at term 3"}
	{"level":"info","ts":"2024-04-22T18:11:16.401966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9612aa3e8bd8b9e4 elected leader 9612aa3e8bd8b9e4 at term 3"}
	{"level":"info","ts":"2024-04-22T18:11:16.407772Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9612aa3e8bd8b9e4","local-member-attributes":"{Name:kubernetes-upgrade-432126 ClientURLs:[https://192.168.50.33:2379]}","request-path":"/0/members/9612aa3e8bd8b9e4/attributes","cluster-id":"48a667124f087be6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:11:16.407934Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:11:16.408097Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:11:16.41003Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.33:2379"}
	{"level":"info","ts":"2024-04-22T18:11:16.411731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:11:16.418086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:11:16.418148Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:11:22 up 1 min,  0 users,  load average: 0.54, 0.20, 0.07
	Linux kubernetes-upgrade-432126 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [97ed9dc5563a7ec6678de26c600a856d02b6ecf9822fd1c9de2b9fcb8762de9e] <==
	I0422 18:11:17.819973       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0422 18:11:17.820053       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0422 18:11:17.920213       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 18:11:17.924250       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 18:11:17.928602       1 aggregator.go:165] initial CRD sync complete...
	I0422 18:11:17.935842       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 18:11:17.935867       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 18:11:17.935891       1 cache.go:39] Caches are synced for autoregister controller
	I0422 18:11:17.951423       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 18:11:17.951631       1 policy_source.go:224] refreshing policies
	I0422 18:11:17.989409       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 18:11:17.990305       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 18:11:17.990345       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 18:11:17.990504       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 18:11:17.990547       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 18:11:18.005144       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 18:11:18.011300       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 18:11:18.035670       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0422 18:11:18.691726       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 18:11:18.826299       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0422 18:11:19.701762       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 18:11:19.715713       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 18:11:19.764948       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 18:11:19.870799       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 18:11:19.881085       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [bc4afcc13b459f5b13fef180c9db5e249ef343a3ffa1ab7ed4c02eb9210eb698] <==
	W0422 18:11:00.064931       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.070801       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.079265       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0422 18:11:00.081646       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0422 18:11:00.082326       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	W0422 18:11:00.083656       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0422 18:11:00.089886       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	W0422 18:11:00.092596       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.092678       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.092715       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.092781       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.092818       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.092898       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.092963       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093024       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093096       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093187       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093254       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093290       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093356       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093398       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.093886       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.094036       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.094126       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:11:00.094164       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [2da688746c8c9713bad225e04e9b92f82a9aaa12a7821a3582fc01ee739c543e] <==
	I0422 18:11:20.033175       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0422 18:11:20.033433       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0422 18:11:20.033529       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0422 18:11:20.036154       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0422 18:11:20.043718       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0422 18:11:20.043800       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0422 18:11:20.068044       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0422 18:11:20.068338       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0422 18:11:20.068599       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0422 18:11:20.155090       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0422 18:11:20.155138       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0422 18:11:20.155147       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0422 18:11:20.195584       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0422 18:11:20.195609       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0422 18:11:20.195676       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0422 18:11:20.196214       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0422 18:11:20.196229       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0422 18:11:20.196252       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0422 18:11:20.196379       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0422 18:11:20.196754       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0422 18:11:20.196766       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0422 18:11:20.196795       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0422 18:11:20.196431       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0422 18:11:20.197830       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0422 18:11:20.197916       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	
	
	==> kube-controller-manager [4430ef7ed675ec83ab46daae7912c1030f0797de8455a9396bcf209c776c78a9] <==
	I0422 18:10:38.960559       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0422 18:10:38.960630       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0422 18:10:38.960731       1 shared_informer.go:320] Caches are synced for persistent volume
	I0422 18:10:38.960831       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0422 18:10:38.984701       1 shared_informer.go:320] Caches are synced for disruption
	I0422 18:10:39.016817       1 shared_informer.go:320] Caches are synced for stateful set
	I0422 18:10:39.035765       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 18:10:39.064417       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 18:10:39.106988       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0422 18:10:39.111022       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0422 18:10:39.111079       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0422 18:10:39.111149       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0422 18:10:39.161651       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0422 18:10:39.557560       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 18:10:39.557624       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0422 18:10:39.602199       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 18:10:39.657740       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="316.886483ms"
	I0422 18:10:39.693549       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.709647ms"
	I0422 18:10:39.693651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.638µs"
	I0422 18:10:39.698198       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.957µs"
	I0422 18:10:42.057876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.636µs"
	I0422 18:10:42.101250       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.9861ms"
	I0422 18:10:42.102863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="209.447µs"
	I0422 18:10:42.146226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.141836ms"
	I0422 18:10:42.147125       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.125µs"
	
	
	==> kube-proxy [193c21405fd8aecb35fd74e0cefa06fe9b9cd3c15dffb34c917c8ffb8f581d32] <==
	I0422 18:10:40.838226       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:10:40.856558       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.33"]
	I0422 18:10:40.997228       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:10:40.997371       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:10:40.997399       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:10:41.001729       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:10:41.002821       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:10:41.002869       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:10:41.008591       1 config.go:192] "Starting service config controller"
	I0422 18:10:41.008606       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:10:41.008629       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:10:41.008633       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:10:41.009289       1 config.go:319] "Starting node config controller"
	I0422 18:10:41.009298       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:10:41.109405       1 shared_informer.go:320] Caches are synced for node config
	I0422 18:10:41.109527       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:10:41.109569       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [da271892b148bb4ba6765467ea3c5eb124aa8894f667766a183a4984c19a6276] <==
	I0422 18:11:18.823780       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:11:18.865784       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.33"]
	I0422 18:11:18.959513       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:11:18.959582       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:11:18.959602       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:11:18.964117       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:11:18.964329       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:11:18.964378       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:11:18.966294       1 config.go:192] "Starting service config controller"
	I0422 18:11:18.966332       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:11:18.966361       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:11:18.966365       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:11:18.968565       1 config.go:319] "Starting node config controller"
	I0422 18:11:18.968698       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:11:19.067323       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 18:11:19.067424       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:11:19.068864       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1724c1f9564057b57d607c645d869de834cd0677f8ccf335fa3bee917948317b] <==
	I0422 18:11:16.351953       1 serving.go:380] Generated self-signed cert in-memory
	W0422 18:11:17.841541       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 18:11:17.841650       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:11:17.841661       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 18:11:17.841747       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 18:11:17.915206       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 18:11:17.915360       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:11:17.918249       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 18:11:17.918290       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:11:17.918549       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 18:11:17.918765       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:11:18.019863       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [804999b472d9cb86da2265b42570041a2b3a9f62877bb29b81497d754ae87b78] <==
	E0422 18:10:24.651053       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 18:10:24.671155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 18:10:24.671375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 18:10:24.672724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:10:24.672924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 18:10:24.682378       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 18:10:24.682517       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 18:10:24.701971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 18:10:24.703031       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 18:10:24.703992       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:10:24.704081       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:10:24.738377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 18:10:24.738582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 18:10:24.747597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 18:10:24.747699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 18:10:24.757753       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 18:10:24.757886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 18:10:24.822708       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:10:24.822846       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 18:10:24.825410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 18:10:24.825572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 18:10:24.914797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:10:24.914894       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0422 18:10:27.627090       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0422 18:11:00.059285       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 22 18:11:14 kubernetes-upgrade-432126 kubelet[2554]: E0422 18:11:14.247719    2554 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.33:8443: connect: connection refused" node="kubernetes-upgrade-432126"
	Apr 22 18:11:14 kubernetes-upgrade-432126 kubelet[2554]: E0422 18:11:14.410981    2554 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.50.33:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-432126.17c8aca2e9e31eb4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-432126,UID:kubernetes-upgrade-432126,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-432126,},FirstTimestamp:2024-04-22 18:11:13.923129012 +0000 UTC m=+0.119639538,LastTimestamp:2024-04-22 18:11:13.923129012 +0000 UTC m=+0.119639538,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-432126,}"
	Apr 22 18:11:14 kubernetes-upgrade-432126 kubelet[2554]: E0422 18:11:14.544881    2554 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-432126?timeout=10s\": dial tcp 192.168.50.33:8443: connect: connection refused" interval="800ms"
	Apr 22 18:11:14 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:14.649672    2554 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-432126"
	Apr 22 18:11:14 kubernetes-upgrade-432126 kubelet[2554]: E0422 18:11:14.651142    2554 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.33:8443: connect: connection refused" node="kubernetes-upgrade-432126"
	Apr 22 18:11:14 kubernetes-upgrade-432126 kubelet[2554]: W0422 18:11:14.887822    2554 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.33:8443: connect: connection refused
	Apr 22 18:11:14 kubernetes-upgrade-432126 kubelet[2554]: E0422 18:11:14.887868    2554 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.33:8443: connect: connection refused
	Apr 22 18:11:15 kubernetes-upgrade-432126 kubelet[2554]: W0422 18:11:15.079690    2554 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-432126&limit=500&resourceVersion=0": dial tcp 192.168.50.33:8443: connect: connection refused
	Apr 22 18:11:15 kubernetes-upgrade-432126 kubelet[2554]: E0422 18:11:15.079752    2554 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-432126&limit=500&resourceVersion=0": dial tcp 192.168.50.33:8443: connect: connection refused
	Apr 22 18:11:15 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:15.453310    2554 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-432126"
	Apr 22 18:11:17 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:17.928390    2554 apiserver.go:52] "Watching apiserver"
	Apr 22 18:11:17 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:17.954313    2554 topology_manager.go:215] "Topology Admit Handler" podUID="63ef2b1e-da54-4c89-ae8c-a73f22864840" podNamespace="kube-system" podName="storage-provisioner"
	Apr 22 18:11:17 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:17.954801    2554 topology_manager.go:215] "Topology Admit Handler" podUID="5162bf4d-3b3b-444b-a70c-a96ccd4c73a6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-85l2j"
	Apr 22 18:11:17 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:17.955118    2554 topology_manager.go:215] "Topology Admit Handler" podUID="c8c2617e-f505-4495-b727-f7b30e6c5961" podNamespace="kube-system" podName="coredns-7db6d8ff4d-q52fd"
	Apr 22 18:11:17 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:17.955356    2554 topology_manager.go:215] "Topology Admit Handler" podUID="29d40e54-368c-4818-8938-936b7e6468e0" podNamespace="kube-system" podName="kube-proxy-fcpkb"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.037880    2554 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.068999    2554 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-432126"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.069106    2554 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-432126"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.070653    2554 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.071560    2554 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.090899    2554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/63ef2b1e-da54-4c89-ae8c-a73f22864840-tmp\") pod \"storage-provisioner\" (UID: \"63ef2b1e-da54-4c89-ae8c-a73f22864840\") " pod="kube-system/storage-provisioner"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.090954    2554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29d40e54-368c-4818-8938-936b7e6468e0-xtables-lock\") pod \"kube-proxy-fcpkb\" (UID: \"29d40e54-368c-4818-8938-936b7e6468e0\") " pod="kube-system/kube-proxy-fcpkb"
	Apr 22 18:11:18 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:18.091005    2554 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29d40e54-368c-4818-8938-936b7e6468e0-lib-modules\") pod \"kube-proxy-fcpkb\" (UID: \"29d40e54-368c-4818-8938-936b7e6468e0\") " pod="kube-system/kube-proxy-fcpkb"
	Apr 22 18:11:21 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:21.204866    2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Apr 22 18:11:22 kubernetes-upgrade-432126 kubelet[2554]: I0422 18:11:22.629728    2554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [1511b6b467f8f1e9d45ace7894443647b7784fae06a28cc841d116282d21b3b4] <==
	I0422 18:11:18.656982       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:11:18.684097       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:11:18.684160       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:11:18.704959       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:11:18.705133       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-432126_40b653c8-80c3-479a-8076-2b9ae37129af!
	I0422 18:11:18.705247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b20ec603-a8e8-44a2-94ef-89e73c0af336", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-432126_40b653c8-80c3-479a-8076-2b9ae37129af became leader
	I0422 18:11:18.810494       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-432126_40b653c8-80c3-479a-8076-2b9ae37129af!
	
	
	==> storage-provisioner [5bfc6b03808addc210609a045b5ae58dbda4b756b59b96ca62978bc066e798c6] <==
	I0422 18:10:40.313882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
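The leader-election lines in the storage-provisioner output above ("attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath" followed by "successfully acquired lease") are standard Kubernetes client-go leader election. For reference only, a minimal Go sketch of that pattern follows; it is not the storage-provisioner's actual code (the log shows its LeaderElection event recorded against an Endpoints object, while this sketch uses the newer Lease-based lock), and the lease name, namespace, and identity are copied from the log purely for illustration.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes the process runs in-cluster, as the storage-provisioner pod does.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Identity roughly corresponds to the "<node>_<uuid>" string seen in the log.
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// This is where the "Starting provisioner controller ..." work would begin.
				log.Println("acquired lease, starting controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}

RunOrDie blocks and keeps renewing the lease while it is held; the controller work belongs in OnStartedLeading, which matches the ordering of the "successfully acquired lease" and "Starting provisioner controller" lines above.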
** stderr ** 
	E0422 18:11:21.412293   62681 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-432126 -n kubernetes-upgrade-432126
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-432126 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-432126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-432126
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-432126: (1.118938684s)
--- FAIL: TestKubernetesUpgrade (421.93s)
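One detail worth pulling out of the stderr block above: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is the stock Go bufio.ErrTooLong error, returned when a single line exceeds the scanner's default 64 KiB token limit (bufio.MaxScanTokenSize). A minimal, hypothetical sketch of reading such a file with an enlarged buffer (this is not minikube's actual logs.go) would look like:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path taken from the error message above; any file with very long lines will do.
	f, err := os.Open("/home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default per-token limit is bufio.MaxScanTokenSize (64 KiB);
	// Buffer must be called before the first Scan to raise it (here to 10 MiB).
	sc.Buffer(make([]byte, 1024*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

Without the Buffer call, Scan stops at the oversized line and Err returns bufio.ErrTooLong, which is the failure reported at logs.go:258 above.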

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (58.53s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-765072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-765072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.384903576s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-765072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-765072" primary control-plane node in "pause-765072" cluster
	* Updating the running kvm2 "pause-765072" VM ...
	* Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-765072" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
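For context on the assertion: pause_test.go:100 above expects the second "minikube start" run to print "The running cluster does not require reconfiguration", and it fails because that line never appears in the captured output. A rough, hypothetical sketch of that kind of check (not the real pause_test.go, which drives the binary through the suite's own Run helpers and profile plumbing):

package pause_sketch

import (
	"os/exec"
	"strings"
	"testing"
)

func TestSecondStartNoReconfiguration(t *testing.T) {
	// Second start against an already-running profile; flags mirror the command shown above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-765072",
		"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("second start failed: %v\n%s", err, out)
	}
	const want = "The running cluster does not require reconfiguration"
	if !strings.Contains(string(out), want) {
		t.Errorf("expected second start output to include %q, got:\n%s", want, out)
	}
}

In the run above the start itself succeeded in about 53 seconds, so only the substring check fails: the stdout shows the full "Preparing Kubernetes ..." and "Configuring bridge CNI ..." path rather than the no-reconfiguration message.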
** stderr ** 
	I0422 18:07:23.969004   58839 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:07:23.969198   58839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:23.969211   58839 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:23.969218   58839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:23.969526   58839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:07:23.970246   58839 out.go:298] Setting JSON to false
	I0422 18:07:23.971476   58839 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6589,"bootTime":1713802655,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:07:23.971535   58839 start.go:139] virtualization: kvm guest
	I0422 18:07:23.974882   58839 out.go:177] * [pause-765072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:07:23.976412   58839 notify.go:220] Checking for updates...
	I0422 18:07:23.978048   58839 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:07:23.979684   58839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:07:23.981497   58839 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:07:23.983101   58839 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:07:23.984943   58839 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:07:23.986734   58839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:07:23.988988   58839 config.go:182] Loaded profile config "pause-765072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:07:23.989608   58839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:07:23.989674   58839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:07:24.011278   58839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0422 18:07:24.012044   58839 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:07:24.012773   58839 main.go:141] libmachine: Using API Version  1
	I0422 18:07:24.012798   58839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:07:24.013214   58839 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:07:24.013528   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:24.013857   58839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:07:24.014299   58839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:07:24.014347   58839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:07:24.032120   58839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35971
	I0422 18:07:24.032583   58839 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:07:24.033018   58839 main.go:141] libmachine: Using API Version  1
	I0422 18:07:24.033035   58839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:07:24.033466   58839 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:07:24.033637   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:24.152010   58839 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:07:24.172510   58839 start.go:297] selected driver: kvm2
	I0422 18:07:24.172535   58839 start.go:901] validating driver "kvm2" against &{Name:pause-765072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.0 ClusterName:pause-765072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:24.172705   58839 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:07:24.173236   58839 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:07:24.173371   58839 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:07:24.193771   58839 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:07:24.194474   58839 cni.go:84] Creating CNI manager for ""
	I0422 18:07:24.194491   58839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:07:24.194552   58839 start.go:340] cluster config:
	{Name:pause-765072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:pause-765072 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:24.194688   58839 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:07:24.252233   58839 out.go:177] * Starting "pause-765072" primary control-plane node in "pause-765072" cluster
	I0422 18:07:24.254427   58839 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:07:24.254480   58839 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:07:24.254489   58839 cache.go:56] Caching tarball of preloaded images
	I0422 18:07:24.254574   58839 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:07:24.254584   58839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:07:24.254690   58839 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/config.json ...
	I0422 18:07:24.254873   58839 start.go:360] acquireMachinesLock for pause-765072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:07:24.254919   58839 start.go:364] duration metric: took 25.283µs to acquireMachinesLock for "pause-765072"
	I0422 18:07:24.254938   58839 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:07:24.254948   58839 fix.go:54] fixHost starting: 
	I0422 18:07:24.255308   58839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:07:24.255356   58839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:07:24.273978   58839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0422 18:07:24.274391   58839 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:07:24.274894   58839 main.go:141] libmachine: Using API Version  1
	I0422 18:07:24.274918   58839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:07:24.275228   58839 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:07:24.275420   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:24.275596   58839 main.go:141] libmachine: (pause-765072) Calling .GetState
	I0422 18:07:24.277349   58839 fix.go:112] recreateIfNeeded on pause-765072: state=Running err=<nil>
	W0422 18:07:24.277405   58839 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:07:24.330629   58839 out.go:177] * Updating the running kvm2 "pause-765072" VM ...
	I0422 18:07:24.332248   58839 machine.go:94] provisionDockerMachine start ...
	I0422 18:07:24.332297   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:24.332618   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:24.336109   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.336590   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:24.336616   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.336786   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:24.336984   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:24.337148   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:24.337321   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:24.337490   58839 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:24.337774   58839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.61 22 <nil> <nil>}
	I0422 18:07:24.337794   58839 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:07:24.484455   58839 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-765072
	
	I0422 18:07:24.484486   58839 main.go:141] libmachine: (pause-765072) Calling .GetMachineName
	I0422 18:07:24.484851   58839 buildroot.go:166] provisioning hostname "pause-765072"
	I0422 18:07:24.484879   58839 main.go:141] libmachine: (pause-765072) Calling .GetMachineName
	I0422 18:07:24.485112   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:24.488944   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.489007   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:24.489026   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.489383   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:24.489717   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:24.489907   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:24.490070   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:24.490282   58839 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:24.490441   58839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.61 22 <nil> <nil>}
	I0422 18:07:24.490453   58839 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-765072 && echo "pause-765072" | sudo tee /etc/hostname
	I0422 18:07:24.635520   58839 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-765072
	
	I0422 18:07:24.635556   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:24.639479   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.640126   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:24.640159   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.640467   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:24.640748   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:24.641003   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:24.641241   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:24.641462   58839 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:24.641683   58839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.61 22 <nil> <nil>}
	I0422 18:07:24.641706   58839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-765072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-765072/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-765072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:07:24.774213   58839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:07:24.774247   58839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:07:24.774309   58839 buildroot.go:174] setting up certificates
	I0422 18:07:24.774329   58839 provision.go:84] configureAuth start
	I0422 18:07:24.774354   58839 main.go:141] libmachine: (pause-765072) Calling .GetMachineName
	I0422 18:07:24.774706   58839 main.go:141] libmachine: (pause-765072) Calling .GetIP
	I0422 18:07:24.778091   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.778560   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:24.778599   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.778809   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:24.781710   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.782213   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:24.782253   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.782490   58839 provision.go:143] copyHostCerts
	I0422 18:07:24.782553   58839 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:07:24.782567   58839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:07:24.782652   58839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:07:24.782767   58839 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:07:24.782781   58839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:07:24.782815   58839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:07:24.782889   58839 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:07:24.782902   58839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:07:24.782925   58839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:07:24.782984   58839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.pause-765072 san=[127.0.0.1 192.168.61.61 localhost minikube pause-765072]
	I0422 18:07:24.976706   58839 provision.go:177] copyRemoteCerts
	I0422 18:07:24.976795   58839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:07:24.976833   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:24.980313   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.980892   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:24.980935   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:24.981292   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:24.981548   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:24.981742   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:24.981941   58839 sshutil.go:53] new ssh client: &{IP:192.168.61.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/pause-765072/id_rsa Username:docker}
	I0422 18:07:25.095076   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:07:25.134607   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0422 18:07:25.170892   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0422 18:07:25.205823   58839 provision.go:87] duration metric: took 431.470788ms to configureAuth
	I0422 18:07:25.205869   58839 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:07:25.206142   58839 config.go:182] Loaded profile config "pause-765072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:07:25.206241   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:25.209144   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:25.209490   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:25.209517   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:25.209762   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:25.209997   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:25.210160   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:25.210355   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:25.210563   58839 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:25.210777   58839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.61 22 <nil> <nil>}
	I0422 18:07:25.210799   58839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:07:31.544090   58839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:07:31.544135   58839 machine.go:97] duration metric: took 7.211842109s to provisionDockerMachine
	I0422 18:07:31.544148   58839 start.go:293] postStartSetup for "pause-765072" (driver="kvm2")
	I0422 18:07:31.544161   58839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:07:31.544214   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:31.544540   58839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:07:31.544568   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:31.547482   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.547818   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:31.547848   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.548027   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:31.548222   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:31.548395   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:31.548533   58839 sshutil.go:53] new ssh client: &{IP:192.168.61.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/pause-765072/id_rsa Username:docker}
	I0422 18:07:31.639790   58839 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:07:31.644848   58839 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:07:31.644879   58839 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:07:31.644945   58839 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:07:31.645049   58839 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:07:31.645171   58839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:07:31.656939   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:07:31.683085   58839 start.go:296] duration metric: took 138.92134ms for postStartSetup
	I0422 18:07:31.683176   58839 fix.go:56] duration metric: took 7.428227397s for fixHost
	I0422 18:07:31.683221   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:31.685903   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.686308   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:31.686339   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.686498   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:31.686709   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:31.686900   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:31.687080   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:31.687260   58839 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:31.687473   58839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.61 22 <nil> <nil>}
	I0422 18:07:31.687489   58839 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:07:31.800269   58839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713809251.797204075
	
	I0422 18:07:31.800288   58839 fix.go:216] guest clock: 1713809251.797204075
	I0422 18:07:31.800297   58839 fix.go:229] Guest: 2024-04-22 18:07:31.797204075 +0000 UTC Remote: 2024-04-22 18:07:31.683185522 +0000 UTC m=+7.787878273 (delta=114.018553ms)
	I0422 18:07:31.800321   58839 fix.go:200] guest clock delta is within tolerance: 114.018553ms
	I0422 18:07:31.800341   58839 start.go:83] releasing machines lock for "pause-765072", held for 7.545396968s
	I0422 18:07:31.800359   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:31.800587   58839 main.go:141] libmachine: (pause-765072) Calling .GetIP
	I0422 18:07:31.803441   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.803889   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:31.803916   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.804103   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:31.804562   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:31.804727   58839 main.go:141] libmachine: (pause-765072) Calling .DriverName
	I0422 18:07:31.804819   58839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:07:31.804855   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:31.804915   58839 ssh_runner.go:195] Run: cat /version.json
	I0422 18:07:31.804939   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHHostname
	I0422 18:07:31.807437   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.807759   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.807824   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:31.807851   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.808014   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:31.808174   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:31.808310   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:31.808328   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:31.808340   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:31.808500   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHPort
	I0422 18:07:31.808495   58839 sshutil.go:53] new ssh client: &{IP:192.168.61.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/pause-765072/id_rsa Username:docker}
	I0422 18:07:31.808661   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHKeyPath
	I0422 18:07:31.808797   58839 main.go:141] libmachine: (pause-765072) Calling .GetSSHUsername
	I0422 18:07:31.808927   58839 sshutil.go:53] new ssh client: &{IP:192.168.61.61 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/pause-765072/id_rsa Username:docker}
	I0422 18:07:31.932033   58839 ssh_runner.go:195] Run: systemctl --version
	I0422 18:07:31.938701   58839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:07:32.103396   58839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:07:32.110167   58839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:07:32.110234   58839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:07:32.119451   58839 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0422 18:07:32.119478   58839 start.go:494] detecting cgroup driver to use...
	I0422 18:07:32.119551   58839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:07:32.135819   58839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:07:32.150168   58839 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:07:32.150222   58839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:07:32.163952   58839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:07:32.177726   58839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:07:32.311078   58839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:07:32.449844   58839 docker.go:233] disabling docker service ...
	I0422 18:07:32.449916   58839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:07:32.467434   58839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:07:32.481693   58839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:07:32.636551   58839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:07:32.770449   58839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:07:32.786564   58839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:07:32.807468   58839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:07:32.807534   58839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:07:32.819179   58839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:07:32.819266   58839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:07:32.830835   58839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:07:32.841506   58839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:07:32.852048   58839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:07:32.863301   58839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:07:32.874282   58839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:07:32.886649   58839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:07:32.897799   58839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:07:32.908259   58839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:07:32.918537   58839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:33.048294   58839 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:07:38.678367   58839 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.630037174s)
	I0422 18:07:38.678390   58839 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:07:38.678436   58839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:07:38.686807   58839 start.go:562] Will wait 60s for crictl version
	I0422 18:07:38.686875   58839 ssh_runner.go:195] Run: which crictl
	I0422 18:07:38.695140   58839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:07:38.739531   58839 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:07:38.739634   58839 ssh_runner.go:195] Run: crio --version
	I0422 18:07:38.780687   58839 ssh_runner.go:195] Run: crio --version
	I0422 18:07:38.824662   58839 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:07:38.826373   58839 main.go:141] libmachine: (pause-765072) Calling .GetIP
	I0422 18:07:38.829467   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:38.829895   58839 main.go:141] libmachine: (pause-765072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:83:22", ip: ""} in network mk-pause-765072: {Iface:virbr1 ExpiryTime:2024-04-22 19:06:40 +0000 UTC Type:0 Mac:52:54:00:0a:83:22 Iaid: IPaddr:192.168.61.61 Prefix:24 Hostname:pause-765072 Clientid:01:52:54:00:0a:83:22}
	I0422 18:07:38.829920   58839 main.go:141] libmachine: (pause-765072) DBG | domain pause-765072 has defined IP address 192.168.61.61 and MAC address 52:54:00:0a:83:22 in network mk-pause-765072
	I0422 18:07:38.830241   58839 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0422 18:07:38.835331   58839 kubeadm.go:877] updating cluster {Name:pause-765072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:pause-765072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:07:38.835566   58839 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:07:38.835663   58839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:07:38.898831   58839 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:07:38.898859   58839 crio.go:433] Images already preloaded, skipping extraction
	I0422 18:07:38.898923   58839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:07:38.953231   58839 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:07:38.953260   58839 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:07:38.953269   58839 kubeadm.go:928] updating node { 192.168.61.61 8443 v1.30.0 crio true true} ...
	I0422 18:07:38.953399   58839 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-765072 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.61
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:pause-765072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:07:38.953496   58839 ssh_runner.go:195] Run: crio config
	I0422 18:07:39.029220   58839 cni.go:84] Creating CNI manager for ""
	I0422 18:07:39.029244   58839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:07:39.029259   58839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:07:39.029283   58839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-765072 NodeName:pause-765072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:07:39.029475   58839 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-765072"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:07:39.029545   58839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:07:39.040717   58839 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:07:39.040788   58839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:07:39.051684   58839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0422 18:07:39.074250   58839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:07:39.095991   58839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 18:07:39.116556   58839 ssh_runner.go:195] Run: grep 192.168.61.61	control-plane.minikube.internal$ /etc/hosts
	I0422 18:07:39.120592   58839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:39.262014   58839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:07:39.285665   58839 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072 for IP: 192.168.61.61
	I0422 18:07:39.285691   58839 certs.go:194] generating shared ca certs ...
	I0422 18:07:39.285713   58839 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:39.285891   58839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:07:39.285951   58839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:07:39.285965   58839 certs.go:256] generating profile certs ...
	I0422 18:07:39.286077   58839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/client.key
	I0422 18:07:39.286161   58839 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.key.2103e6d4
	I0422 18:07:39.286238   58839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.key
	I0422 18:07:39.286378   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:07:39.286431   58839 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:07:39.286445   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:07:39.286476   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:07:39.286510   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:07:39.286545   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:07:39.286599   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:07:39.287192   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:07:39.321346   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:07:39.375654   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:07:39.450589   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:07:39.571660   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 18:07:39.774879   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:07:39.837135   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:07:40.068822   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:07:40.219903   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:07:40.294931   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:07:40.535331   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:07:40.571903   58839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:07:40.597853   58839 ssh_runner.go:195] Run: openssl version
	I0422 18:07:40.604886   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:07:40.622245   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.628069   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.628113   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.636693   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:07:40.652757   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:07:40.669734   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.676976   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.677045   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.683760   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:07:40.695849   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:07:40.717463   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.730038   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.730113   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.738632   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:07:40.763849   58839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:07:40.775058   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:07:40.785586   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:07:40.795662   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:07:40.806800   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:07:40.817025   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:07:40.831241   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:07:40.871295   58839 kubeadm.go:391] StartCluster: {Name:pause-765072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:pause-765072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:40.871405   58839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:07:40.871502   58839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:07:40.982046   58839 cri.go:89] found id: "b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852"
	I0422 18:07:40.982072   58839 cri.go:89] found id: "51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4"
	I0422 18:07:40.982078   58839 cri.go:89] found id: "43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8"
	I0422 18:07:40.982083   58839 cri.go:89] found id: "2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1"
	I0422 18:07:40.982087   58839 cri.go:89] found id: "38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3"
	I0422 18:07:40.982092   58839 cri.go:89] found id: "84fd3075de664b9f68e7b8fdfd8cec26375e0ff92836ab99e3f54e8d6d6d7f36"
	I0422 18:07:40.982099   58839 cri.go:89] found id: "dcf7652be0f699e0db19217a9aab8413216377d7b3847a098b217f9ef052fe4b"
	I0422 18:07:40.982103   58839 cri.go:89] found id: "834eb361b43b8b4c40fc1794f936ca1d215c3cc0992bdf88c8e337052c6cadd7"
	I0422 18:07:40.982107   58839 cri.go:89] found id: "89d03f6aba35873808c26574f093d265a06b6324cabafb9a20d4ae7894000f1f"
	I0422 18:07:40.982124   58839 cri.go:89] found id: "8c4a87e9bd190ac8db2c0eeacea88ec9f94611dfa8008e84b9e829d26d98dbfe"
	I0422 18:07:40.982136   58839 cri.go:89] found id: "d76411d02b31e48091e276ce7fde6b7135abdfad858bc5e067d1def215f94045"
	I0422 18:07:40.982144   58839 cri.go:89] found id: ""
	I0422 18:07:40.982184   58839 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-765072 -n pause-765072
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-765072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-765072 logs -n 25: (2.060138453s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo cat              | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo cat              | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo find             | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo crio             | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-457191                       | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC | 22 Apr 24 18:04 UTC |
	| start   | -p stopped-upgrade-310712              | minikube                  | jenkins | v1.26.0 | 22 Apr 24 18:04 UTC | 22 Apr 24 18:06 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-461193 ssh cat      | force-systemd-flag-461193 | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-461193           | force-systemd-flag-461193 | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	| start   | -p running-upgrade-759056              | minikube                  | jenkins | v1.26.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:06 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-417483                 | offline-crio-417483       | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:06 UTC |
	| start   | -p pause-765072 --memory=2048          | pause-765072              | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:07 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-310712 stop            | minikube                  | jenkins | v1.26.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:06 UTC |
	| start   | -p stopped-upgrade-310712              | stopped-upgrade-310712    | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:07 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-759056              | running-upgrade-759056    | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:08 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-765072                        | pause-765072              | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC | 22 Apr 24 18:08 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-310712              | stopped-upgrade-310712    | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC | 22 Apr 24 18:07 UTC |
	| start   | -p NoKubernetes-799191                 | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC |                     |
	|         | --no-kubernetes                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20              |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-799191                 | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-759056              | running-upgrade-759056    | jenkins | v1.33.0 | 22 Apr 24 18:08 UTC | 22 Apr 24 18:08 UTC |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:07:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:07:40.828002   59118 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:07:40.828152   59118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:40.828157   59118 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:40.828163   59118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:40.828463   59118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:07:40.829115   59118 out.go:298] Setting JSON to false
	I0422 18:07:40.830380   59118 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6606,"bootTime":1713802655,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:07:40.830455   59118 start.go:139] virtualization: kvm guest
	I0422 18:07:40.832557   59118 out.go:177] * [NoKubernetes-799191] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:07:40.834819   59118 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:07:40.834830   59118 notify.go:220] Checking for updates...
	I0422 18:07:40.836240   59118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:07:40.837542   59118 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:07:40.838936   59118 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:07:40.840358   59118 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:07:40.841868   59118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:07:40.843898   59118 config.go:182] Loaded profile config "kubernetes-upgrade-432126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:07:40.844012   59118 config.go:182] Loaded profile config "pause-765072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:07:40.844090   59118 config.go:182] Loaded profile config "running-upgrade-759056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0422 18:07:40.844169   59118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:07:40.883333   59118 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:07:40.884725   59118 start.go:297] selected driver: kvm2
	I0422 18:07:40.884732   59118 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:07:40.884742   59118 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:07:40.885062   59118 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:07:40.885134   59118 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:07:40.901031   59118 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:07:40.901081   59118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 18:07:40.901556   59118 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0422 18:07:40.901683   59118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 18:07:40.901741   59118 cni.go:84] Creating CNI manager for ""
	I0422 18:07:40.901749   59118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:07:40.901756   59118 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 18:07:40.901806   59118 start.go:340] cluster config:
	{Name:NoKubernetes-799191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:NoKubernetes-799191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:40.901898   59118 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:07:40.903762   59118 out.go:177] * Starting "NoKubernetes-799191" primary control-plane node in "NoKubernetes-799191" cluster
	I0422 18:07:40.905342   59118 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:07:40.905376   59118 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:07:40.905382   59118 cache.go:56] Caching tarball of preloaded images
	I0422 18:07:40.905471   59118 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:07:40.905477   59118 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:07:40.905561   59118 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/NoKubernetes-799191/config.json ...
	I0422 18:07:40.905573   59118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/NoKubernetes-799191/config.json: {Name:mk4d5ded59c2d126a1aed7dedfe9fda9116faf89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:40.905692   59118 start.go:360] acquireMachinesLock for NoKubernetes-799191: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:07:40.905716   59118 start.go:364] duration metric: took 15.013µs to acquireMachinesLock for "NoKubernetes-799191"
	I0422 18:07:40.905729   59118 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-799191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.30.0 ClusterName:NoKubernetes-799191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:07:40.905780   59118 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 18:07:38.399771   58620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:07:38.416193   58620 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:07:38.416273   58620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:07:38.428758   58620 api_server.go:72] duration metric: took 266.827242ms to wait for apiserver process to appear ...
	I0422 18:07:38.428789   58620 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:07:38.428811   58620 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0422 18:07:38.435940   58620 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0422 18:07:38.437395   58620 api_server.go:141] control plane version: v1.24.1
	I0422 18:07:38.437419   58620 api_server.go:131] duration metric: took 8.623883ms to wait for apiserver health ...
	I0422 18:07:38.437437   58620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:07:38.446351   58620 system_pods.go:59] 7 kube-system pods found
	I0422 18:07:38.446389   58620 system_pods.go:61] "coredns-6d4b75cb6d-5hm92" [e2f71325-37f2-4f07-9585-1423cf5aaf61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:07:38.446401   58620 system_pods.go:61] "etcd-running-upgrade-759056" [b8ed3a6d-1752-42a3-a68c-b6fe4bb367d7] Running
	I0422 18:07:38.446409   58620 system_pods.go:61] "kube-apiserver-running-upgrade-759056" [048336b6-bdd1-42df-ab45-4f18286a84fa] Running
	I0422 18:07:38.446414   58620 system_pods.go:61] "kube-controller-manager-running-upgrade-759056" [0532b6a8-c4b8-4d0b-ba6d-c684ee0c1dd6] Running
	I0422 18:07:38.446419   58620 system_pods.go:61] "kube-proxy-t49tq" [dcee080f-c328-4224-8eac-6ab7cfe3f4ef] Running
	I0422 18:07:38.446429   58620 system_pods.go:61] "kube-scheduler-running-upgrade-759056" [94ab7912-c066-4ca8-b141-21fe6eed4c0b] Running
	I0422 18:07:38.446437   58620 system_pods.go:61] "storage-provisioner" [2fb5ac6a-3b92-402f-b128-e1d82baeea45] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:07:38.446450   58620 system_pods.go:74] duration metric: took 9.007401ms to wait for pod list to return data ...
	I0422 18:07:38.446466   58620 kubeadm.go:576] duration metric: took 284.541465ms to wait for: map[apiserver:true system_pods:true]
	I0422 18:07:38.446491   58620 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:07:38.450453   58620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0422 18:07:38.450477   58620 node_conditions.go:123] node cpu capacity is 2
	I0422 18:07:38.450489   58620 node_conditions.go:105] duration metric: took 3.993274ms to run NodePressure ...
	I0422 18:07:38.450502   58620 start.go:240] waiting for startup goroutines ...
	I0422 18:07:38.542154   58620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:07:38.576758   58620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:07:39.029220   58839 cni.go:84] Creating CNI manager for ""
	I0422 18:07:39.029244   58839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:07:39.029259   58839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:07:39.029283   58839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-765072 NodeName:pause-765072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:07:39.029475   58839 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-765072"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:07:39.029545   58839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:07:39.040717   58839 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:07:39.040788   58839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:07:39.051684   58839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0422 18:07:39.074250   58839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:07:39.095991   58839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 18:07:39.116556   58839 ssh_runner.go:195] Run: grep 192.168.61.61	control-plane.minikube.internal$ /etc/hosts
	I0422 18:07:39.120592   58839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:39.262014   58839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:07:39.285665   58839 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072 for IP: 192.168.61.61
	I0422 18:07:39.285691   58839 certs.go:194] generating shared ca certs ...
	I0422 18:07:39.285713   58839 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:39.285891   58839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:07:39.285951   58839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:07:39.285965   58839 certs.go:256] generating profile certs ...
	I0422 18:07:39.286077   58839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/client.key
	I0422 18:07:39.286161   58839 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.key.2103e6d4
	I0422 18:07:39.286238   58839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.key
	I0422 18:07:39.286378   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:07:39.286431   58839 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:07:39.286445   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:07:39.286476   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:07:39.286510   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:07:39.286545   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:07:39.286599   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:07:39.287192   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:07:39.321346   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:07:39.375654   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:07:39.450589   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:07:39.571660   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 18:07:39.774879   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:07:39.837135   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:07:40.068822   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:07:40.219903   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:07:40.294931   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:07:40.535331   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:07:40.571903   58839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:07:40.597853   58839 ssh_runner.go:195] Run: openssl version
	I0422 18:07:40.604886   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:07:40.622245   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.628069   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.628113   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.636693   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:07:40.652757   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:07:40.669734   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.676976   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.677045   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.683760   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:07:40.695849   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:07:40.717463   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.730038   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.730113   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.738632   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:07:40.763849   58839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:07:40.775058   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:07:40.785586   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:07:40.795662   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:07:40.806800   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:07:40.817025   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:07:40.831241   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:07:40.871295   58839 kubeadm.go:391] StartCluster: {Name:pause-765072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:pause-765072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:40.871405   58839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:07:40.871502   58839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:07:40.982046   58839 cri.go:89] found id: "b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852"
	I0422 18:07:40.982072   58839 cri.go:89] found id: "51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4"
	I0422 18:07:40.982078   58839 cri.go:89] found id: "43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8"
	I0422 18:07:40.982083   58839 cri.go:89] found id: "2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1"
	I0422 18:07:40.982087   58839 cri.go:89] found id: "38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3"
	I0422 18:07:40.982092   58839 cri.go:89] found id: "84fd3075de664b9f68e7b8fdfd8cec26375e0ff92836ab99e3f54e8d6d6d7f36"
	I0422 18:07:40.982099   58839 cri.go:89] found id: "dcf7652be0f699e0db19217a9aab8413216377d7b3847a098b217f9ef052fe4b"
	I0422 18:07:40.982103   58839 cri.go:89] found id: "834eb361b43b8b4c40fc1794f936ca1d215c3cc0992bdf88c8e337052c6cadd7"
	I0422 18:07:40.982107   58839 cri.go:89] found id: "89d03f6aba35873808c26574f093d265a06b6324cabafb9a20d4ae7894000f1f"
	I0422 18:07:40.982124   58839 cri.go:89] found id: "8c4a87e9bd190ac8db2c0eeacea88ec9f94611dfa8008e84b9e829d26d98dbfe"
	I0422 18:07:40.982136   58839 cri.go:89] found id: "d76411d02b31e48091e276ce7fde6b7135abdfad858bc5e067d1def215f94045"
	I0422 18:07:40.982144   58839 cri.go:89] found id: ""
	I0422 18:07:40.982184   58839 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.610204147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0dd12591-f290-45f0-a002-ac430f853403 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.612851808Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6016d056-b9bb-4314-b104-687996a2a5c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.613436969Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809298613403149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6016d056-b9bb-4314-b104-687996a2a5c3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.614086041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55bffd64-6d76-4f45-b2c1-18097a609a16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.614185647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55bffd64-6d76-4f45-b2c1-18097a609a16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.614574224Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55bffd64-6d76-4f45-b2c1-18097a609a16 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.624418100Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e2a1044f-067c-4eca-b4c3-b986006ec33c name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.624748401Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bptnp,Uid:51051240-f9a4-4707-98f0-1a96508e4f42,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713809259773858921,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:07:22.263967954Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-765072,Uid:1af988aa59aefd3cfc2d02666f06a1c4,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1713809259536893318,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.61:8443,kubernetes.io/config.hash: 1af988aa59aefd3cfc2d02666f06a1c4,kubernetes.io/config.seen: 2024-04-22T18:07:08.300571808Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&PodSandboxMetadata{Name:kube-proxy-rmt6z,Uid:3e1f08e1-cd2e-440c-8685-c7728de99dda,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713809259527861984,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:07:22.092394094Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-765072,Uid:2fb3bfbeef7c2a6a40600616294bbe91,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713809259500104155,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2fb3bfbeef7c2a6a40600616294bbe91,kubernetes.io/config.seen: 2024-04-22T18:07:08.300573698Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0da339765d81ea3b700316c4
b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-765072,Uid:875c556405bfdbca5f3354efc1bfbe9f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713809259449358862,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 875c556405bfdbca5f3354efc1bfbe9f,kubernetes.io/config.seen: 2024-04-22T18:07:08.300572826Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&PodSandboxMetadata{Name:etcd-pause-765072,Uid:9d6a138efa284fdbb4eb5b76e4ab69c5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713809259415256004,Labels:map[string]string{component: etcd,io.kubernetes.contain
er.name: POD,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.61:2379,kubernetes.io/config.hash: 9d6a138efa284fdbb4eb5b76e4ab69c5,kubernetes.io/config.seen: 2024-04-22T18:07:08.300567656Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e2a1044f-067c-4eca-b4c3-b986006ec33c name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.625356441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41059212-14cf-4573-82a5-6a0049879ca1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.625412215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41059212-14cf-4573-82a5-6a0049879ca1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.625743260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41059212-14cf-4573-82a5-6a0049879ca1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.669918260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=924e795b-a9e3-455c-98bf-1cb318a36f08 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.670043993Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=924e795b-a9e3-455c-98bf-1cb318a36f08 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.671354967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d3f3ce6-c554-4d59-a146-2a3867eefe47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.672158947Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809298672124838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d3f3ce6-c554-4d59-a146-2a3867eefe47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.673262576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a281e156-fe63-433f-975b-4051c8dfa927 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.673358339Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a281e156-fe63-433f-975b-4051c8dfa927 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.673808120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a281e156-fe63-433f-975b-4051c8dfa927 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.718412593Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50c29f89-4c4c-4d03-9f36-b04e4356498d name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.718496832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50c29f89-4c4c-4d03-9f36-b04e4356498d name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.719981511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20f8cf5f-9bf1-4aa8-b608-fe30d8e50416 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.720563728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809298720531786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20f8cf5f-9bf1-4aa8-b608-fe30d8e50416 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.721575482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ade9722f-3dae-4c11-9772-7831c37056f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.721756537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ade9722f-3dae-4c11-9772-7831c37056f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:18 pause-765072 crio[2240]: time="2024-04-22 18:08:18.722136374Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ade9722f-3dae-4c11-9772-7831c37056f3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	175a04216880f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   21 seconds ago      Running             kube-proxy                2                   1e6810a8f951c       kube-proxy-rmt6z
	ca2f6ca2d9449       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago      Running             coredns                   2                   6809b2df45f33       coredns-7db6d8ff4d-bptnp
	68937c5c7607a       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   24 seconds ago      Running             kube-scheduler            2                   8184771e14fb9       kube-scheduler-pause-765072
	8fdfc2fc1d480       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   24 seconds ago      Running             kube-controller-manager   2                   0da339765d81e       kube-controller-manager-pause-765072
	caf64a2f4609a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   24 seconds ago      Running             kube-apiserver            2                   bdd4383be4446       kube-apiserver-pause-765072
	7b02a9b612306       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   24 seconds ago      Running             etcd                      2                   4988568dbc743       etcd-pause-765072
	a726f610c7cf7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   37 seconds ago      Exited              coredns                   1                   6809b2df45f33       coredns-7db6d8ff4d-bptnp
	b704240c06632       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   38 seconds ago      Exited              kube-apiserver            1                   bdd4383be4446       kube-apiserver-pause-765072
	51ba045af77ed       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   38 seconds ago      Exited              kube-proxy                1                   1e6810a8f951c       kube-proxy-rmt6z
	43c8d6592c3b6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   38 seconds ago      Exited              kube-scheduler            1                   8184771e14fb9       kube-scheduler-pause-765072
	2e8828dcd4752       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   38 seconds ago      Exited              kube-controller-manager   1                   0da339765d81e       kube-controller-manager-pause-765072
	38ed02eb29fd9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   39 seconds ago      Exited              etcd                      1                   4988568dbc743       etcd-pause-765072
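	
	A table in this shape is what "crictl ps -a" typically prints for the CRI-O runtime. As a purely illustrative command (not part of the captured log above), the same view can be pulled from a live profile with:
	
	  out/minikube-linux-amd64 -p pause-765072 ssh -- sudo crictl ps -a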
	
	
	==> coredns [a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2] <==
	
	
	==> coredns [ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46716 - 23288 "HINFO IN 5070365188812453313.1735116471462260445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008196882s
	
	
	==> describe nodes <==
	Name:               pause-765072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-765072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=pause-765072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_07_09_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-765072
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:08:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.61
	  Hostname:    pause-765072
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d506242eb66542b8ae4b5515f4a6dd65
	  System UUID:                d506242e-b665-42b8-ae4b-5515f4a6dd65
	  Boot ID:                    eb80f8f2-657f-4091-b697-3648d0b93dad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bptnp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     57s
	  kube-system                 etcd-pause-765072                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         71s
	  kube-system                 kube-apiserver-pause-765072             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-pause-765072    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-rmt6z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-pause-765072             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientPID     71s                kubelet          Node pause-765072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  71s                kubelet          Node pause-765072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s                kubelet          Node pause-765072 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeReady                70s                kubelet          Node pause-765072 status is now: NodeReady
	  Normal  RegisteredNode           58s                node-controller  Node pause-765072 event: Registered Node pause-765072 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-765072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-765072 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-765072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-765072 event: Registered Node pause-765072 in Controller
	
	
	==> dmesg <==
	[  +0.063540] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079468] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.239604] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.145541] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.343627] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.751673] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +0.071680] kauditd_printk_skb: 130 callbacks suppressed
	[Apr22 18:07] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +1.172202] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.898803] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.100483] kauditd_printk_skb: 52 callbacks suppressed
	[ +13.298828] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[ +10.788192] systemd-fstab-generator[2160]: Ignoring "noauto" option for root device
	[  +0.075630] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.062170] systemd-fstab-generator[2172]: Ignoring "noauto" option for root device
	[  +0.170536] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.152362] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[  +0.278671] systemd-fstab-generator[2227]: Ignoring "noauto" option for root device
	[  +6.204178] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.081343] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.365207] kauditd_printk_skb: 86 callbacks suppressed
	[  +1.498994] systemd-fstab-generator[3161]: Ignoring "noauto" option for root device
	[  +4.614935] kauditd_printk_skb: 43 callbacks suppressed
	[Apr22 18:08] kauditd_printk_skb: 4 callbacks suppressed
	[  +4.650058] systemd-fstab-generator[3626]: Ignoring "noauto" option for root device
	
	
	==> etcd [38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3] <==
	{"level":"info","ts":"2024-04-22T18:07:40.379346Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:41.500717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T18:07:41.500775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:07:41.500805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgPreVoteResp from 18554485c6f2b6a0 at term 2"}
	{"level":"info","ts":"2024-04-22T18:07:41.500825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.500831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgVoteResp from 18554485c6f2b6a0 at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.500839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became leader at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.500846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18554485c6f2b6a0 elected leader 18554485c6f2b6a0 at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.504052Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"18554485c6f2b6a0","local-member-attributes":"{Name:pause-765072 ClientURLs:[https://192.168.61.61:2379]}","request-path":"/0/members/18554485c6f2b6a0/attributes","cluster-id":"41aa97de13f517c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:07:41.504237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:41.51527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:07:41.518938Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:41.520572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.61:2379"}
	{"level":"info","ts":"2024-04-22T18:07:41.525548Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:41.532183Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:51.92997Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T18:07:51.93014Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-765072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.61:2380"],"advertise-client-urls":["https://192.168.61.61:2379"]}
	{"level":"warn","ts":"2024-04-22T18:07:51.93037Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:07:51.930438Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:07:51.932283Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.61:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:07:51.93232Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.61:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T18:07:51.932383Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"18554485c6f2b6a0","current-leader-member-id":"18554485c6f2b6a0"}
	{"level":"info","ts":"2024-04-22T18:07:51.936819Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:51.937006Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:51.937024Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-765072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.61:2380"],"advertise-client-urls":["https://192.168.61.61:2379"]}
	
	
	==> etcd [7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60] <==
	{"level":"info","ts":"2024-04-22T18:07:54.252903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:07:54.25781Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:54.257847Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:54.257509Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T18:07:54.259761Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:07:54.259688Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"18554485c6f2b6a0","initial-advertise-peer-urls":["https://192.168.61.61:2380"],"listen-peer-urls":["https://192.168.61.61:2380"],"advertise-client-urls":["https://192.168.61.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:07:55.317728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:55.317831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:55.317867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgPreVoteResp from 18554485c6f2b6a0 at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:55.317897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became candidate at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.31792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgVoteResp from 18554485c6f2b6a0 at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.317966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became leader at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.317992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18554485c6f2b6a0 elected leader 18554485c6f2b6a0 at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.326014Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"18554485c6f2b6a0","local-member-attributes":"{Name:pause-765072 ClientURLs:[https://192.168.61.61:2379]}","request-path":"/0/members/18554485c6f2b6a0/attributes","cluster-id":"41aa97de13f517c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:07:55.326228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:55.326506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:55.328224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:07:55.329726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:55.329764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:55.329669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.61:2379"}
	{"level":"warn","ts":"2024-04-22T18:08:11.912159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.566709ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13159675371374078772 > lease_revoke:<id:36a08f06fd70abb7>","response":"size:27"}
	{"level":"info","ts":"2024-04-22T18:08:11.91254Z","caller":"traceutil/trace.go:171","msg":"trace[90961414] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:489; }","duration":"103.048695ms","start":"2024-04-22T18:08:11.809461Z","end":"2024-04-22T18:08:11.912509Z","steps":["trace[90961414] 'read index received'  (duration: 34.523µs)","trace[90961414] 'applied index is now lower than readState.Index'  (duration: 103.012465ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T18:08:11.913086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.591014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-765072\" ","response":"range_response_count:1 size:5653"}
	{"level":"info","ts":"2024-04-22T18:08:11.913181Z","caller":"traceutil/trace.go:171","msg":"trace[1532149853] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-765072; range_end:; response_count:1; response_revision:456; }","duration":"103.740247ms","start":"2024-04-22T18:08:11.809428Z","end":"2024-04-22T18:08:11.913168Z","steps":["trace[1532149853] 'agreement among raft nodes before linearized reading'  (duration: 103.568736ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T18:08:12.556596Z","caller":"traceutil/trace.go:171","msg":"trace[2001706110] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"181.114265ms","start":"2024-04-22T18:08:12.375463Z","end":"2024-04-22T18:08:12.556577Z","steps":["trace[2001706110] 'process raft request'  (duration: 180.704089ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:08:19 up 1 min,  0 users,  load average: 1.47, 0.57, 0.20
	Linux pause-765072 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852] <==
	I0422 18:07:43.533365       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0422 18:07:43.533520       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0422 18:07:43.536739       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 18:07:43.544226       1 controller.go:157] Shutting down quota evaluator
	I0422 18:07:43.544266       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544417       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544452       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544458       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544462       1 controller.go:176] quota evaluator worker shutdown
	W0422 18:07:44.344716       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:44.345167       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	E0422 18:07:45.344386       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:45.344393       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0422 18:07:46.344399       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:46.344408       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:47.343961       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:47.345376       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:48.344560       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:48.344964       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:49.344008       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:49.344582       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:50.343735       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:50.344512       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:51.344231       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:51.344819       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-apiserver [caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a] <==
	I0422 18:07:56.863568       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 18:07:56.867555       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 18:07:56.867606       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 18:07:56.869021       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 18:07:56.869482       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 18:07:56.869541       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 18:07:56.863873       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 18:07:56.870936       1 aggregator.go:165] initial CRD sync complete...
	I0422 18:07:56.870945       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 18:07:56.870950       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 18:07:56.870955       1 cache.go:39] Caches are synced for autoregister controller
	I0422 18:07:56.872052       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0422 18:07:56.873319       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 18:07:56.878465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 18:07:56.878554       1 policy_source.go:224] refreshing policies
	I0422 18:07:56.932028       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 18:07:57.780314       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0422 18:07:58.185163       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.61]
	I0422 18:07:58.186552       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 18:07:58.192335       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 18:07:58.616380       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 18:07:58.633875       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 18:07:58.686958       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 18:07:58.721536       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 18:07:58.728895       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1] <==
	I0422 18:07:41.744788       1 serving.go:380] Generated self-signed cert in-memory
	I0422 18:07:42.296177       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 18:07:42.296225       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:42.297971       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0422 18:07:42.298165       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:07:42.298486       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 18:07:42.298501       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647] <==
	I0422 18:08:09.172581       1 shared_informer.go:320] Caches are synced for GC
	I0422 18:08:09.175032       1 shared_informer.go:320] Caches are synced for PV protection
	I0422 18:08:09.175079       1 shared_informer.go:320] Caches are synced for namespace
	I0422 18:08:09.184327       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0422 18:08:09.189882       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0422 18:08:09.196473       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0422 18:08:09.202378       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0422 18:08:09.205907       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0422 18:08:09.206137       1 shared_informer.go:320] Caches are synced for ephemeral
	I0422 18:08:09.208178       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0422 18:08:09.212690       1 shared_informer.go:320] Caches are synced for daemon sets
	I0422 18:08:09.215339       1 shared_informer.go:320] Caches are synced for deployment
	I0422 18:08:09.254926       1 shared_informer.go:320] Caches are synced for HPA
	I0422 18:08:09.302067       1 shared_informer.go:320] Caches are synced for disruption
	I0422 18:08:09.304988       1 shared_informer.go:320] Caches are synced for persistent volume
	I0422 18:08:09.338880       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 18:08:09.359055       1 shared_informer.go:320] Caches are synced for taint
	I0422 18:08:09.359312       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0422 18:08:09.363954       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-765072"
	I0422 18:08:09.364151       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0422 18:08:09.364268       1 shared_informer.go:320] Caches are synced for stateful set
	I0422 18:08:09.373855       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 18:08:09.820970       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 18:08:09.830408       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 18:08:09.830511       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd] <==
	I0422 18:07:57.880998       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:07:57.905601       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.61"]
	I0422 18:07:57.969052       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:07:57.969183       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:07:57.969282       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:07:57.974437       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:07:57.974614       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:07:57.974693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:57.976318       1 config.go:192] "Starting service config controller"
	I0422 18:07:57.976354       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:07:57.976381       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:07:57.976386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:07:57.976760       1 config.go:319] "Starting node config controller"
	I0422 18:07:57.976790       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:07:58.077803       1 shared_informer.go:320] Caches are synced for node config
	I0422 18:07:58.077849       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:07:58.077901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4] <==
	E0422 18:07:43.578415       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.61.61:8443: connect: connection refused"
	W0422 18:07:43.578516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:43.578560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:43.578706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:43.578765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:43.578813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:43.578859       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:44.584809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:44.584923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:44.919324       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:44.919413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:45.152220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:45.152382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:46.610324       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:46.610431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:46.616099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:46.616225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:47.305178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:47.305235       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:49.935797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:49.935907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:50.580809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:50.580973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:50.753099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:50.753206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	
	
	==> kube-scheduler [43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8] <==
	I0422 18:07:42.641717       1 serving.go:380] Generated self-signed cert in-memory
	W0422 18:07:43.376694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 18:07:43.377806       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:07:43.377894       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 18:07:43.377921       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 18:07:43.456580       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 18:07:43.456731       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:43.460526       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 18:07:43.461775       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 18:07:43.468738       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:07:43.461792       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:07:43.571051       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:07:51.659817       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0422 18:07:51.660266       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0422 18:07:51.660371       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec] <==
	I0422 18:07:55.212299       1 serving.go:380] Generated self-signed cert in-memory
	W0422 18:07:56.814025       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 18:07:56.814079       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:07:56.814094       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 18:07:56.814100       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 18:07:56.845405       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 18:07:56.845448       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:56.849350       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 18:07:56.850964       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 18:07:56.851017       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:07:56.851037       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:07:56.951724       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.662829    3168 kubelet_node_status.go:73] "Attempting to register node" node="pause-765072"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: E0422 18:07:53.663921    3168 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.61:8443: connect: connection refused" node="pause-765072"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.788809    3168 scope.go:117] "RemoveContainer" containerID="38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.791224    3168 scope.go:117] "RemoveContainer" containerID="b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.796064    3168 scope.go:117] "RemoveContainer" containerID="2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.798181    3168 scope.go:117] "RemoveContainer" containerID="43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: E0422 18:07:53.966102    3168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-765072?timeout=10s\": dial tcp 192.168.61.61:8443: connect: connection refused" interval="800ms"
	Apr 22 18:07:54 pause-765072 kubelet[3168]: I0422 18:07:54.071114    3168 kubelet_node_status.go:73] "Attempting to register node" node="pause-765072"
	Apr 22 18:07:54 pause-765072 kubelet[3168]: E0422 18:07:54.074157    3168 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.61:8443: connect: connection refused" node="pause-765072"
	Apr 22 18:07:54 pause-765072 kubelet[3168]: W0422 18:07:54.174049    3168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	Apr 22 18:07:54 pause-765072 kubelet[3168]: E0422 18:07:54.174145    3168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	Apr 22 18:07:54 pause-765072 kubelet[3168]: I0422 18:07:54.875445    3168 kubelet_node_status.go:73] "Attempting to register node" node="pause-765072"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.905015    3168 kubelet_node_status.go:112] "Node was previously registered" node="pause-765072"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.905127    3168 kubelet_node_status.go:76] "Successfully registered node" node="pause-765072"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.908247    3168 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.909484    3168 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.348997    3168 apiserver.go:52] "Watching apiserver"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.352872    3168 topology_manager.go:215] "Topology Admit Handler" podUID="3e1f08e1-cd2e-440c-8685-c7728de99dda" podNamespace="kube-system" podName="kube-proxy-rmt6z"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.353029    3168 topology_manager.go:215] "Topology Admit Handler" podUID="51051240-f9a4-4707-98f0-1a96508e4f42" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bptnp"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.361670    3168 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.436788    3168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e1f08e1-cd2e-440c-8685-c7728de99dda-xtables-lock\") pod \"kube-proxy-rmt6z\" (UID: \"3e1f08e1-cd2e-440c-8685-c7728de99dda\") " pod="kube-system/kube-proxy-rmt6z"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.436974    3168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1f08e1-cd2e-440c-8685-c7728de99dda-lib-modules\") pod \"kube-proxy-rmt6z\" (UID: \"3e1f08e1-cd2e-440c-8685-c7728de99dda\") " pod="kube-system/kube-proxy-rmt6z"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.654506    3168 scope.go:117] "RemoveContainer" containerID="a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.655075    3168 scope.go:117] "RemoveContainer" containerID="51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4"
	Apr 22 18:08:01 pause-765072 kubelet[3168]: I0422 18:08:01.249084    3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 18:08:18.211887   59598 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
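The "token too long" failure in the stderr block above comes from reading lastStart.txt with a bufio.Scanner, whose default maximum token size is 64 KiB, so a single over-long line aborts the scan. The sketch below only illustrates the general workaround (raising the scanner's buffer cap before scanning); the file name and the 1 MiB limit are assumptions for illustration, not minikube's actual code or settings.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical input file; stands in for .minikube/logs/lastStart.txt from the report.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.MaxScanTokenSize defaults to 64 KiB; raising the cap keeps very long
		// log lines from failing with "bufio.Scanner: token too long" (1 MiB is an assumed limit).
		sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}
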
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-765072 -n pause-765072
helpers_test.go:261: (dbg) Run:  kubectl --context pause-765072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-765072 -n pause-765072
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-765072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-765072 logs -n 25: (1.626828526s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo cat              | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo cat              | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo                  | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo find             | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-457191 sudo crio             | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-457191                       | cilium-457191             | jenkins | v1.33.0 | 22 Apr 24 18:04 UTC | 22 Apr 24 18:04 UTC |
	| start   | -p stopped-upgrade-310712              | minikube                  | jenkins | v1.26.0 | 22 Apr 24 18:04 UTC | 22 Apr 24 18:06 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-461193 ssh cat      | force-systemd-flag-461193 | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-461193           | force-systemd-flag-461193 | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	| start   | -p running-upgrade-759056              | minikube                  | jenkins | v1.26.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:06 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p offline-crio-417483                 | offline-crio-417483       | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:06 UTC |
	| start   | -p pause-765072 --memory=2048          | pause-765072              | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:07 UTC |
	|         | --install-addons=false                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-310712 stop            | minikube                  | jenkins | v1.26.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:06 UTC |
	| start   | -p stopped-upgrade-310712              | stopped-upgrade-310712    | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:07 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p running-upgrade-759056              | running-upgrade-759056    | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:08 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-765072                        | pause-765072              | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC | 22 Apr 24 18:08 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-310712              | stopped-upgrade-310712    | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC | 22 Apr 24 18:07 UTC |
	| start   | -p NoKubernetes-799191                 | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC |                     |
	|         | --no-kubernetes                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20              |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-799191                 | NoKubernetes-799191       | jenkins | v1.33.0 | 22 Apr 24 18:07 UTC |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-759056              | running-upgrade-759056    | jenkins | v1.33.0 | 22 Apr 24 18:08 UTC | 22 Apr 24 18:08 UTC |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:07:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:07:40.828002   59118 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:07:40.828152   59118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:40.828157   59118 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:40.828163   59118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:40.828463   59118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:07:40.829115   59118 out.go:298] Setting JSON to false
	I0422 18:07:40.830380   59118 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6606,"bootTime":1713802655,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:07:40.830455   59118 start.go:139] virtualization: kvm guest
	I0422 18:07:40.832557   59118 out.go:177] * [NoKubernetes-799191] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:07:40.834819   59118 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:07:40.834830   59118 notify.go:220] Checking for updates...
	I0422 18:07:40.836240   59118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:07:40.837542   59118 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:07:40.838936   59118 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:07:40.840358   59118 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:07:40.841868   59118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:07:40.843898   59118 config.go:182] Loaded profile config "kubernetes-upgrade-432126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:07:40.844012   59118 config.go:182] Loaded profile config "pause-765072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:07:40.844090   59118 config.go:182] Loaded profile config "running-upgrade-759056": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0422 18:07:40.844169   59118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:07:40.883333   59118 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:07:40.884725   59118 start.go:297] selected driver: kvm2
	I0422 18:07:40.884732   59118 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:07:40.884742   59118 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:07:40.885062   59118 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:07:40.885134   59118 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:07:40.901031   59118 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:07:40.901081   59118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 18:07:40.901556   59118 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0422 18:07:40.901683   59118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 18:07:40.901741   59118 cni.go:84] Creating CNI manager for ""
	I0422 18:07:40.901749   59118 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:07:40.901756   59118 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 18:07:40.901806   59118 start.go:340] cluster config:
	{Name:NoKubernetes-799191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:NoKubernetes-799191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contai
nerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:40.901898   59118 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:07:40.903762   59118 out.go:177] * Starting "NoKubernetes-799191" primary control-plane node in "NoKubernetes-799191" cluster
	I0422 18:07:40.905342   59118 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:07:40.905376   59118 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:07:40.905382   59118 cache.go:56] Caching tarball of preloaded images
	I0422 18:07:40.905471   59118 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:07:40.905477   59118 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:07:40.905561   59118 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/NoKubernetes-799191/config.json ...
	I0422 18:07:40.905573   59118 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/NoKubernetes-799191/config.json: {Name:mk4d5ded59c2d126a1aed7dedfe9fda9116faf89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:40.905692   59118 start.go:360] acquireMachinesLock for NoKubernetes-799191: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:07:40.905716   59118 start.go:364] duration metric: took 15.013µs to acquireMachinesLock for "NoKubernetes-799191"
	I0422 18:07:40.905729   59118 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-799191 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.30.0 ClusterName:NoKubernetes-799191 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:07:40.905780   59118 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 18:07:38.399771   58620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:07:38.416193   58620 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:07:38.416273   58620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:07:38.428758   58620 api_server.go:72] duration metric: took 266.827242ms to wait for apiserver process to appear ...
	I0422 18:07:38.428789   58620 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:07:38.428811   58620 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0422 18:07:38.435940   58620 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0422 18:07:38.437395   58620 api_server.go:141] control plane version: v1.24.1
	I0422 18:07:38.437419   58620 api_server.go:131] duration metric: took 8.623883ms to wait for apiserver health ...
	I0422 18:07:38.437437   58620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:07:38.446351   58620 system_pods.go:59] 7 kube-system pods found
	I0422 18:07:38.446389   58620 system_pods.go:61] "coredns-6d4b75cb6d-5hm92" [e2f71325-37f2-4f07-9585-1423cf5aaf61] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:07:38.446401   58620 system_pods.go:61] "etcd-running-upgrade-759056" [b8ed3a6d-1752-42a3-a68c-b6fe4bb367d7] Running
	I0422 18:07:38.446409   58620 system_pods.go:61] "kube-apiserver-running-upgrade-759056" [048336b6-bdd1-42df-ab45-4f18286a84fa] Running
	I0422 18:07:38.446414   58620 system_pods.go:61] "kube-controller-manager-running-upgrade-759056" [0532b6a8-c4b8-4d0b-ba6d-c684ee0c1dd6] Running
	I0422 18:07:38.446419   58620 system_pods.go:61] "kube-proxy-t49tq" [dcee080f-c328-4224-8eac-6ab7cfe3f4ef] Running
	I0422 18:07:38.446429   58620 system_pods.go:61] "kube-scheduler-running-upgrade-759056" [94ab7912-c066-4ca8-b141-21fe6eed4c0b] Running
	I0422 18:07:38.446437   58620 system_pods.go:61] "storage-provisioner" [2fb5ac6a-3b92-402f-b128-e1d82baeea45] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:07:38.446450   58620 system_pods.go:74] duration metric: took 9.007401ms to wait for pod list to return data ...
	I0422 18:07:38.446466   58620 kubeadm.go:576] duration metric: took 284.541465ms to wait for: map[apiserver:true system_pods:true]
	I0422 18:07:38.446491   58620 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:07:38.450453   58620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0422 18:07:38.450477   58620 node_conditions.go:123] node cpu capacity is 2
	I0422 18:07:38.450489   58620 node_conditions.go:105] duration metric: took 3.993274ms to run NodePressure ...
	I0422 18:07:38.450502   58620 start.go:240] waiting for startup goroutines ...
	I0422 18:07:38.542154   58620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:07:38.576758   58620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:07:39.029220   58839 cni.go:84] Creating CNI manager for ""
	I0422 18:07:39.029244   58839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:07:39.029259   58839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:07:39.029283   58839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.61 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-765072 NodeName:pause-765072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.61"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.61 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:07:39.029475   58839 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.61
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-765072"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.61
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.61"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:07:39.029545   58839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:07:39.040717   58839 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:07:39.040788   58839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:07:39.051684   58839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0422 18:07:39.074250   58839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:07:39.095991   58839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0422 18:07:39.116556   58839 ssh_runner.go:195] Run: grep 192.168.61.61	control-plane.minikube.internal$ /etc/hosts
	I0422 18:07:39.120592   58839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:39.262014   58839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:07:39.285665   58839 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072 for IP: 192.168.61.61
	I0422 18:07:39.285691   58839 certs.go:194] generating shared ca certs ...
	I0422 18:07:39.285713   58839 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:39.285891   58839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:07:39.285951   58839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:07:39.285965   58839 certs.go:256] generating profile certs ...
	I0422 18:07:39.286077   58839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/client.key
	I0422 18:07:39.286161   58839 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.key.2103e6d4
	I0422 18:07:39.286238   58839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.key
	I0422 18:07:39.286378   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:07:39.286431   58839 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:07:39.286445   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:07:39.286476   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:07:39.286510   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:07:39.286545   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:07:39.286599   58839 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:07:39.287192   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:07:39.321346   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:07:39.375654   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:07:39.450589   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:07:39.571660   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 18:07:39.774879   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:07:39.837135   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:07:40.068822   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/pause-765072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:07:40.219903   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:07:40.294931   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:07:40.535331   58839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:07:40.571903   58839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:07:40.597853   58839 ssh_runner.go:195] Run: openssl version
	I0422 18:07:40.604886   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:07:40.622245   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.628069   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.628113   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:07:40.636693   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:07:40.652757   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:07:40.669734   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.676976   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.677045   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:40.683760   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:07:40.695849   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:07:40.717463   58839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.730038   58839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.730113   58839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:07:40.738632   58839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:07:40.763849   58839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:07:40.775058   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:07:40.785586   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:07:40.795662   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:07:40.806800   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:07:40.817025   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:07:40.831241   58839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:07:40.871295   58839 kubeadm.go:391] StartCluster: {Name:pause-765072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Cl
usterName:pause-765072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.61 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:40.871405   58839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:07:40.871502   58839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:07:40.982046   58839 cri.go:89] found id: "b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852"
	I0422 18:07:40.982072   58839 cri.go:89] found id: "51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4"
	I0422 18:07:40.982078   58839 cri.go:89] found id: "43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8"
	I0422 18:07:40.982083   58839 cri.go:89] found id: "2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1"
	I0422 18:07:40.982087   58839 cri.go:89] found id: "38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3"
	I0422 18:07:40.982092   58839 cri.go:89] found id: "84fd3075de664b9f68e7b8fdfd8cec26375e0ff92836ab99e3f54e8d6d6d7f36"
	I0422 18:07:40.982099   58839 cri.go:89] found id: "dcf7652be0f699e0db19217a9aab8413216377d7b3847a098b217f9ef052fe4b"
	I0422 18:07:40.982103   58839 cri.go:89] found id: "834eb361b43b8b4c40fc1794f936ca1d215c3cc0992bdf88c8e337052c6cadd7"
	I0422 18:07:40.982107   58839 cri.go:89] found id: "89d03f6aba35873808c26574f093d265a06b6324cabafb9a20d4ae7894000f1f"
	I0422 18:07:40.982124   58839 cri.go:89] found id: "8c4a87e9bd190ac8db2c0eeacea88ec9f94611dfa8008e84b9e829d26d98dbfe"
	I0422 18:07:40.982136   58839 cri.go:89] found id: "d76411d02b31e48091e276ce7fde6b7135abdfad858bc5e067d1def215f94045"
	I0422 18:07:40.982144   58839 cri.go:89] found id: ""
	I0422 18:07:40.982184   58839 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.866103190Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809300866070211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c1608d0-e0d4-4b85-8532-4a9e84ee16ed name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.867126529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2221bdef-c3d6-41d2-b9c9-8e3fd7677cdd name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.867225005Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2221bdef-c3d6-41d2-b9c9-8e3fd7677cdd name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.867802529Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2221bdef-c3d6-41d2-b9c9-8e3fd7677cdd name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.927179759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=890f2c96-6424-4444-861e-f8dffb9dc0be name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.927345468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=890f2c96-6424-4444-861e-f8dffb9dc0be name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.930518819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8a9126b-dcb6-48db-8fd2-1d70e28c2887 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.931743219Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809300931705523,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8a9126b-dcb6-48db-8fd2-1d70e28c2887 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.932602540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a874691-7ba0-44e2-b24f-ea4d00d203bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.932797268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a874691-7ba0-44e2-b24f-ea4d00d203bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.933148641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a874691-7ba0-44e2-b24f-ea4d00d203bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.992240024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcc2eb8e-40bb-4e0e-8512-f2c5dab3bea8 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.992383891Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcc2eb8e-40bb-4e0e-8512-f2c5dab3bea8 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.995264360Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6bfb0f4-d3f6-4406-abd0-fac3393f0b95 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.995974366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809300995935686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6bfb0f4-d3f6-4406-abd0-fac3393f0b95 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.996790508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dd9a664-973a-4c91-b860-68d63eda0d3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.996911141Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dd9a664-973a-4c91-b860-68d63eda0d3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:20 pause-765072 crio[2240]: time="2024-04-22 18:08:20.997276255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dd9a664-973a-4c91-b860-68d63eda0d3b name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:21 pause-765072 crio[2240]: time="2024-04-22 18:08:21.055221912Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e02ea8c9-2e7b-4cd6-90a6-4e5caf6d8bd7 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:21 pause-765072 crio[2240]: time="2024-04-22 18:08:21.055343311Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e02ea8c9-2e7b-4cd6-90a6-4e5caf6d8bd7 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:08:21 pause-765072 crio[2240]: time="2024-04-22 18:08:21.057070097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3540ea5-063f-4e31-99f7-ff2402a7ccc6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:21 pause-765072 crio[2240]: time="2024-04-22 18:08:21.057687302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713809301057584347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3540ea5-063f-4e31-99f7-ff2402a7ccc6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:08:21 pause-765072 crio[2240]: time="2024-04-22 18:08:21.058580315Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92bbb57d-e05f-42e3-bb5e-5dd6b90ba468 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:21 pause-765072 crio[2240]: time="2024-04-22 18:08:21.058759968Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92bbb57d-e05f-42e3-bb5e-5dd6b90ba468 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:08:21 pause-765072 crio[2240]: time="2024-04-22 18:08:21.059137731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:1713809277687141941,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713809277669456978,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 70737402,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713809273843491657,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c
2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713809273827473762,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1
c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713809273831866024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f335
4efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713809273813572052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.k
ubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2,PodSandboxId:6809b2df45f33d11633502d309a173adeaa56ad15fc768d4171de7008fd5e89f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713809260873011184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bptnp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51051240-f9a4-4707-98f0-1a96508e4f42,},Annotations:map[string]string{io.kubernetes.container.hash: 707374
02,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4,PodSandboxId:1e6810a8f951ce61d9fe306f3976a898ae3a95afe5bbfa5fa627488a84436a4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_EXITED,CreatedAt:1713809259916838760,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.
name: kube-proxy-rmt6z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e1f08e1-cd2e-440c-8685-c7728de99dda,},Annotations:map[string]string{io.kubernetes.container.hash: a87032,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852,PodSandboxId:bdd4383be444600c53b9b7c2d62078b9079372675104a7da383d67d8c200e07a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713809259997462670,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-7
65072,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1af988aa59aefd3cfc2d02666f06a1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1d6f5361,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8,PodSandboxId:8184771e14fb95a9ea1eac058e78aa6f2f653954e2901dd73ac560e7a2ba90a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_EXITED,CreatedAt:1713809259870381498,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-765072,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fb3bfbeef7c2a6a40600616294bbe91,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1,PodSandboxId:0da339765d81ea3b700316c4b7e8bc84bf0bcecc703a48f24f8ac1c64d01ddc9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_EXITED,CreatedAt:1713809259817412824,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-765072,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 875c556405bfdbca5f3354efc1bfbe9f,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3,PodSandboxId:4988568dbc743d87d5c5b89582a0e93eca3e3f6214ebcf558b21bde40e6bc62f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713809259769817214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-765072,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 9d6a138efa284fdbb4eb5b76e4ab69c5,},Annotations:map[string]string{io.kubernetes.container.hash: 69870a37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92bbb57d-e05f-42e3-bb5e-5dd6b90ba468 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	175a04216880f       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   23 seconds ago      Running             kube-proxy                2                   1e6810a8f951c       kube-proxy-rmt6z
	ca2f6ca2d9449       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   23 seconds ago      Running             coredns                   2                   6809b2df45f33       coredns-7db6d8ff4d-bptnp
	68937c5c7607a       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   27 seconds ago      Running             kube-scheduler            2                   8184771e14fb9       kube-scheduler-pause-765072
	8fdfc2fc1d480       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   27 seconds ago      Running             kube-controller-manager   2                   0da339765d81e       kube-controller-manager-pause-765072
	caf64a2f4609a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   27 seconds ago      Running             kube-apiserver            2                   bdd4383be4446       kube-apiserver-pause-765072
	7b02a9b612306       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   27 seconds ago      Running             etcd                      2                   4988568dbc743       etcd-pause-765072
	a726f610c7cf7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   40 seconds ago      Exited              coredns                   1                   6809b2df45f33       coredns-7db6d8ff4d-bptnp
	b704240c06632       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   41 seconds ago      Exited              kube-apiserver            1                   bdd4383be4446       kube-apiserver-pause-765072
	51ba045af77ed       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   41 seconds ago      Exited              kube-proxy                1                   1e6810a8f951c       kube-proxy-rmt6z
	43c8d6592c3b6       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   41 seconds ago      Exited              kube-scheduler            1                   8184771e14fb9       kube-scheduler-pause-765072
	2e8828dcd4752       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   41 seconds ago      Exited              kube-controller-manager   1                   0da339765d81e       kube-controller-manager-pause-765072
	38ed02eb29fd9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   41 seconds ago      Exited              etcd                      1                   4988568dbc743       etcd-pause-765072
	
	
	==> coredns [a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2] <==
	
	
	==> coredns [ca2f6ca2d9449cc62a4b38bbcb3daf6ff72ccc9ec2aeb44c44ae2f0de820ac1c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46716 - 23288 "HINFO IN 5070365188812453313.1735116471462260445. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008196882s
	
	
	==> describe nodes <==
	Name:               pause-765072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-765072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=pause-765072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_07_09_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:07:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-765072
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:08:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:07:56 +0000   Mon, 22 Apr 2024 18:07:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.61
	  Hostname:    pause-765072
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 d506242eb66542b8ae4b5515f4a6dd65
	  System UUID:                d506242e-b665-42b8-ae4b-5515f4a6dd65
	  Boot ID:                    eb80f8f2-657f-4091-b697-3648d0b93dad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bptnp                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-pause-765072                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         73s
	  kube-system                 kube-apiserver-pause-765072             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-pause-765072    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-rmt6z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-pause-765072             100m (5%)     0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientPID     73s                kubelet          Node pause-765072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node pause-765072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node pause-765072 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeReady                72s                kubelet          Node pause-765072 status is now: NodeReady
	  Normal  RegisteredNode           60s                node-controller  Node pause-765072 event: Registered Node pause-765072 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node pause-765072 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node pause-765072 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node pause-765072 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-765072 event: Registered Node pause-765072 in Controller
	
	
	==> dmesg <==
	[  +0.063540] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.079468] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.239604] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.145541] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.343627] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +4.751673] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +0.071680] kauditd_printk_skb: 130 callbacks suppressed
	[Apr22 18:07] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +1.172202] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.898803] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.100483] kauditd_printk_skb: 52 callbacks suppressed
	[ +13.298828] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[ +10.788192] systemd-fstab-generator[2160]: Ignoring "noauto" option for root device
	[  +0.075630] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.062170] systemd-fstab-generator[2172]: Ignoring "noauto" option for root device
	[  +0.170536] systemd-fstab-generator[2186]: Ignoring "noauto" option for root device
	[  +0.152362] systemd-fstab-generator[2198]: Ignoring "noauto" option for root device
	[  +0.278671] systemd-fstab-generator[2227]: Ignoring "noauto" option for root device
	[  +6.204178] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.081343] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.365207] kauditd_printk_skb: 86 callbacks suppressed
	[  +1.498994] systemd-fstab-generator[3161]: Ignoring "noauto" option for root device
	[  +4.614935] kauditd_printk_skb: 43 callbacks suppressed
	[Apr22 18:08] kauditd_printk_skb: 4 callbacks suppressed
	[  +4.650058] systemd-fstab-generator[3626]: Ignoring "noauto" option for root device
	
	
	==> etcd [38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3] <==
	{"level":"info","ts":"2024-04-22T18:07:40.379346Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:41.500717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-22T18:07:41.500775Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:07:41.500805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgPreVoteResp from 18554485c6f2b6a0 at term 2"}
	{"level":"info","ts":"2024-04-22T18:07:41.500825Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became candidate at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.500831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgVoteResp from 18554485c6f2b6a0 at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.500839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became leader at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.500846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18554485c6f2b6a0 elected leader 18554485c6f2b6a0 at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:41.504052Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"18554485c6f2b6a0","local-member-attributes":"{Name:pause-765072 ClientURLs:[https://192.168.61.61:2379]}","request-path":"/0/members/18554485c6f2b6a0/attributes","cluster-id":"41aa97de13f517c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:07:41.504237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:41.51527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:07:41.518938Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:41.520572Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.61:2379"}
	{"level":"info","ts":"2024-04-22T18:07:41.525548Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:41.532183Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:51.92997Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-22T18:07:51.93014Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-765072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.61:2380"],"advertise-client-urls":["https://192.168.61.61:2379"]}
	{"level":"warn","ts":"2024-04-22T18:07:51.93037Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:07:51.930438Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:07:51.932283Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.61:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-22T18:07:51.93232Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.61:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-22T18:07:51.932383Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"18554485c6f2b6a0","current-leader-member-id":"18554485c6f2b6a0"}
	{"level":"info","ts":"2024-04-22T18:07:51.936819Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:51.937006Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:51.937024Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-765072","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.61:2380"],"advertise-client-urls":["https://192.168.61.61:2379"]}
	
	
	==> etcd [7b02a9b612306cee06cc83a3e92d7fb332a97a2f7ebaa139fb9ce4292065ae60] <==
	{"level":"info","ts":"2024-04-22T18:07:54.252903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:07:54.25781Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:54.257847Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.61:2380"}
	{"level":"info","ts":"2024-04-22T18:07:54.257509Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T18:07:54.259761Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:07:54.259688Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"18554485c6f2b6a0","initial-advertise-peer-urls":["https://192.168.61.61:2380"],"listen-peer-urls":["https://192.168.61.61:2380"],"advertise-client-urls":["https://192.168.61.61:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.61:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:07:55.317728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:55.317831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:55.317867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgPreVoteResp from 18554485c6f2b6a0 at term 3"}
	{"level":"info","ts":"2024-04-22T18:07:55.317897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became candidate at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.31792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 received MsgVoteResp from 18554485c6f2b6a0 at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.317966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"18554485c6f2b6a0 became leader at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.317992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 18554485c6f2b6a0 elected leader 18554485c6f2b6a0 at term 4"}
	{"level":"info","ts":"2024-04-22T18:07:55.326014Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"18554485c6f2b6a0","local-member-attributes":"{Name:pause-765072 ClientURLs:[https://192.168.61.61:2379]}","request-path":"/0/members/18554485c6f2b6a0/attributes","cluster-id":"41aa97de13f517c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:07:55.326228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:55.326506Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:07:55.328224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:07:55.329726Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:55.329764Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:07:55.329669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.61:2379"}
	{"level":"warn","ts":"2024-04-22T18:08:11.912159Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.566709ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13159675371374078772 > lease_revoke:<id:36a08f06fd70abb7>","response":"size:27"}
	{"level":"info","ts":"2024-04-22T18:08:11.91254Z","caller":"traceutil/trace.go:171","msg":"trace[90961414] linearizableReadLoop","detail":"{readStateIndex:490; appliedIndex:489; }","duration":"103.048695ms","start":"2024-04-22T18:08:11.809461Z","end":"2024-04-22T18:08:11.912509Z","steps":["trace[90961414] 'read index received'  (duration: 34.523µs)","trace[90961414] 'applied index is now lower than readState.Index'  (duration: 103.012465ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-22T18:08:11.913086Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.591014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-765072\" ","response":"range_response_count:1 size:5653"}
	{"level":"info","ts":"2024-04-22T18:08:11.913181Z","caller":"traceutil/trace.go:171","msg":"trace[1532149853] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-765072; range_end:; response_count:1; response_revision:456; }","duration":"103.740247ms","start":"2024-04-22T18:08:11.809428Z","end":"2024-04-22T18:08:11.913168Z","steps":["trace[1532149853] 'agreement among raft nodes before linearized reading'  (duration: 103.568736ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-22T18:08:12.556596Z","caller":"traceutil/trace.go:171","msg":"trace[2001706110] transaction","detail":"{read_only:false; response_revision:457; number_of_response:1; }","duration":"181.114265ms","start":"2024-04-22T18:08:12.375463Z","end":"2024-04-22T18:08:12.556577Z","steps":["trace[2001706110] 'process raft request'  (duration: 180.704089ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:08:21 up 1 min,  0 users,  load average: 1.47, 0.57, 0.20
	Linux pause-765072 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852] <==
	I0422 18:07:43.533365       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0422 18:07:43.533520       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0422 18:07:43.536739       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0422 18:07:43.544226       1 controller.go:157] Shutting down quota evaluator
	I0422 18:07:43.544266       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544417       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544452       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544458       1 controller.go:176] quota evaluator worker shutdown
	I0422 18:07:43.544462       1 controller.go:176] quota evaluator worker shutdown
	W0422 18:07:44.344716       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:44.345167       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	E0422 18:07:45.344386       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:45.344393       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0422 18:07:46.344399       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:46.344408       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:47.343961       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:47.345376       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:48.344560       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:48.344964       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:49.344008       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:49.344582       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:50.343735       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:50.344512       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0422 18:07:51.344231       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0422 18:07:51.344819       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	
	
	==> kube-apiserver [caf64a2f4609aaeed35fabf2d1d832a697e91e06069b357d481d62dc4b30643a] <==
	I0422 18:07:56.863568       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0422 18:07:56.867555       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0422 18:07:56.867606       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0422 18:07:56.869021       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0422 18:07:56.869482       1 shared_informer.go:320] Caches are synced for configmaps
	I0422 18:07:56.869541       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0422 18:07:56.863873       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0422 18:07:56.870936       1 aggregator.go:165] initial CRD sync complete...
	I0422 18:07:56.870945       1 autoregister_controller.go:141] Starting autoregister controller
	I0422 18:07:56.870950       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0422 18:07:56.870955       1 cache.go:39] Caches are synced for autoregister controller
	I0422 18:07:56.872052       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0422 18:07:56.873319       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0422 18:07:56.878465       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0422 18:07:56.878554       1 policy_source.go:224] refreshing policies
	I0422 18:07:56.932028       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0422 18:07:57.780314       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0422 18:07:58.185163       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.61]
	I0422 18:07:58.186552       1 controller.go:615] quota admission added evaluator for: endpoints
	I0422 18:07:58.192335       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0422 18:07:58.616380       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0422 18:07:58.633875       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0422 18:07:58.686958       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0422 18:07:58.721536       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0422 18:07:58.728895       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1] <==
	I0422 18:07:41.744788       1 serving.go:380] Generated self-signed cert in-memory
	I0422 18:07:42.296177       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0422 18:07:42.296225       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:42.297971       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0422 18:07:42.298165       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:07:42.298486       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0422 18:07:42.298501       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [8fdfc2fc1d480b64648a1753a4122d54ad9d72139fb013469e560f2b48ff2647] <==
	I0422 18:08:09.172581       1 shared_informer.go:320] Caches are synced for GC
	I0422 18:08:09.175032       1 shared_informer.go:320] Caches are synced for PV protection
	I0422 18:08:09.175079       1 shared_informer.go:320] Caches are synced for namespace
	I0422 18:08:09.184327       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0422 18:08:09.189882       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0422 18:08:09.196473       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0422 18:08:09.202378       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0422 18:08:09.205907       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0422 18:08:09.206137       1 shared_informer.go:320] Caches are synced for ephemeral
	I0422 18:08:09.208178       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0422 18:08:09.212690       1 shared_informer.go:320] Caches are synced for daemon sets
	I0422 18:08:09.215339       1 shared_informer.go:320] Caches are synced for deployment
	I0422 18:08:09.254926       1 shared_informer.go:320] Caches are synced for HPA
	I0422 18:08:09.302067       1 shared_informer.go:320] Caches are synced for disruption
	I0422 18:08:09.304988       1 shared_informer.go:320] Caches are synced for persistent volume
	I0422 18:08:09.338880       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 18:08:09.359055       1 shared_informer.go:320] Caches are synced for taint
	I0422 18:08:09.359312       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0422 18:08:09.363954       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-765072"
	I0422 18:08:09.364151       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0422 18:08:09.364268       1 shared_informer.go:320] Caches are synced for stateful set
	I0422 18:08:09.373855       1 shared_informer.go:320] Caches are synced for resource quota
	I0422 18:08:09.820970       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 18:08:09.830408       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 18:08:09.830511       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [175a04216880f86785f441bd0e3100efa4858d111be63e46a37e0f3cc4e5c8cd] <==
	I0422 18:07:57.880998       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:07:57.905601       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.61"]
	I0422 18:07:57.969052       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:07:57.969183       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:07:57.969282       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:07:57.974437       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:07:57.974614       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:07:57.974693       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:57.976318       1 config.go:192] "Starting service config controller"
	I0422 18:07:57.976354       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:07:57.976381       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:07:57.976386       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:07:57.976760       1 config.go:319] "Starting node config controller"
	I0422 18:07:57.976790       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:07:58.077803       1 shared_informer.go:320] Caches are synced for node config
	I0422 18:07:58.077849       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:07:58.077901       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4] <==
	E0422 18:07:43.578415       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.61.61:8443: connect: connection refused"
	W0422 18:07:43.578516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:43.578560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:43.578706       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:43.578765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:43.578813       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:43.578859       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:44.584809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:44.584923       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:44.919324       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:44.919413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:45.152220       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:45.152382       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:46.610324       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:46.610431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:46.616099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:46.616225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:47.305178       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:47.305235       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:49.935797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:49.935907       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:50.580809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:50.580973       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-765072&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	W0422 18:07:50.753099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	E0422 18:07:50.753206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	
	
	==> kube-scheduler [43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8] <==
	I0422 18:07:42.641717       1 serving.go:380] Generated self-signed cert in-memory
	W0422 18:07:43.376694       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 18:07:43.377806       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:07:43.377894       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 18:07:43.377921       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 18:07:43.456580       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 18:07:43.456731       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:43.460526       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 18:07:43.461775       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 18:07:43.468738       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:07:43.461792       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:07:43.571051       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:07:51.659817       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0422 18:07:51.660266       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0422 18:07:51.660371       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [68937c5c7607adbf2ceb29107371c29e1c506840263c130190f5cb95fc4a87ec] <==
	I0422 18:07:55.212299       1 serving.go:380] Generated self-signed cert in-memory
	W0422 18:07:56.814025       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 18:07:56.814079       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:07:56.814094       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 18:07:56.814100       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 18:07:56.845405       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0422 18:07:56.845448       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:07:56.849350       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0422 18:07:56.850964       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0422 18:07:56.851017       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:07:56.851037       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0422 18:07:56.951724       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.662829    3168 kubelet_node_status.go:73] "Attempting to register node" node="pause-765072"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: E0422 18:07:53.663921    3168 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.61:8443: connect: connection refused" node="pause-765072"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.788809    3168 scope.go:117] "RemoveContainer" containerID="38ed02eb29fd9a5ade64bcb8fc2099837fc19b8c850c29234446acaa9bdc77b3"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.791224    3168 scope.go:117] "RemoveContainer" containerID="b704240c06632cc5db01ded7ef563efc6b46502e752ee39ae56f85ffd9519852"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.796064    3168 scope.go:117] "RemoveContainer" containerID="2e8828dcd4752e71d34698f314b2bbedb59880148791de7d59a802471b2833e1"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: I0422 18:07:53.798181    3168 scope.go:117] "RemoveContainer" containerID="43c8d6592c3b69ca5ee16a064f35fcdad7fb550112ba2cb41d7e4e943e383eb8"
	Apr 22 18:07:53 pause-765072 kubelet[3168]: E0422 18:07:53.966102    3168 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-765072?timeout=10s\": dial tcp 192.168.61.61:8443: connect: connection refused" interval="800ms"
	Apr 22 18:07:54 pause-765072 kubelet[3168]: I0422 18:07:54.071114    3168 kubelet_node_status.go:73] "Attempting to register node" node="pause-765072"
	Apr 22 18:07:54 pause-765072 kubelet[3168]: E0422 18:07:54.074157    3168 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.61:8443: connect: connection refused" node="pause-765072"
	Apr 22 18:07:54 pause-765072 kubelet[3168]: W0422 18:07:54.174049    3168 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	Apr 22 18:07:54 pause-765072 kubelet[3168]: E0422 18:07:54.174145    3168 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.61:8443: connect: connection refused
	Apr 22 18:07:54 pause-765072 kubelet[3168]: I0422 18:07:54.875445    3168 kubelet_node_status.go:73] "Attempting to register node" node="pause-765072"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.905015    3168 kubelet_node_status.go:112] "Node was previously registered" node="pause-765072"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.905127    3168 kubelet_node_status.go:76] "Successfully registered node" node="pause-765072"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.908247    3168 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 22 18:07:56 pause-765072 kubelet[3168]: I0422 18:07:56.909484    3168 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.348997    3168 apiserver.go:52] "Watching apiserver"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.352872    3168 topology_manager.go:215] "Topology Admit Handler" podUID="3e1f08e1-cd2e-440c-8685-c7728de99dda" podNamespace="kube-system" podName="kube-proxy-rmt6z"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.353029    3168 topology_manager.go:215] "Topology Admit Handler" podUID="51051240-f9a4-4707-98f0-1a96508e4f42" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bptnp"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.361670    3168 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.436788    3168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e1f08e1-cd2e-440c-8685-c7728de99dda-xtables-lock\") pod \"kube-proxy-rmt6z\" (UID: \"3e1f08e1-cd2e-440c-8685-c7728de99dda\") " pod="kube-system/kube-proxy-rmt6z"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.436974    3168 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1f08e1-cd2e-440c-8685-c7728de99dda-lib-modules\") pod \"kube-proxy-rmt6z\" (UID: \"3e1f08e1-cd2e-440c-8685-c7728de99dda\") " pod="kube-system/kube-proxy-rmt6z"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.654506    3168 scope.go:117] "RemoveContainer" containerID="a726f610c7cf753bb92579d4f4dd1a86c412f62355a3a7195d9f6ea5b5dc46e2"
	Apr 22 18:07:57 pause-765072 kubelet[3168]: I0422 18:07:57.655075    3168 scope.go:117] "RemoveContainer" containerID="51ba045af77ed363d1cf376cb3894246b654342879a01868ec657e05b3dbf8f4"
	Apr 22 18:08:01 pause-765072 kubelet[3168]: I0422 18:08:01.249084    3168 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 18:08:20.484400   59776 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18706-11572/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-765072 -n pause-765072
helpers_test.go:261: (dbg) Run:  kubectl --context pause-765072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (58.53s)
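Note on the "failed to output last start logs ... bufio.Scanner: token too long" line in the stderr block above: Go's bufio.Scanner rejects any line longer than its default 64 KiB token limit, so a lastStart.txt containing a very long line cannot be re-read that way. The following is a minimal, illustrative Go sketch (not minikube's actual logs.go) of reading such a file with an enlarged scanner buffer; the "lastStart.txt" path is a placeholder.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Placeholder path for illustration only.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; raising it avoids
		// "bufio.Scanner: token too long" on very long log lines.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // each (possibly very long) log line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}

Under the same assumption, bufio.Reader.ReadString('\n') is an alternative that has no fixed per-line limit.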

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (277.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-367072 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0422 18:15:07.902462   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-367072 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m37.161525715s)

                                                
                                                
-- stdout --
	* [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 18:15:05.483379   70624 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:15:05.483667   70624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:15:05.483701   70624 out.go:304] Setting ErrFile to fd 2...
	I0422 18:15:05.483717   70624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:15:05.484166   70624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:15:05.485223   70624 out.go:298] Setting JSON to false
	I0422 18:15:05.486301   70624 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7051,"bootTime":1713802655,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:15:05.486368   70624 start.go:139] virtualization: kvm guest
	I0422 18:15:05.489103   70624 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:15:05.490668   70624 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:15:05.490686   70624 notify.go:220] Checking for updates...
	I0422 18:15:05.492244   70624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:15:05.494019   70624 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:15:05.495649   70624 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:15:05.497015   70624 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:15:05.498396   70624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:15:05.500339   70624 config.go:182] Loaded profile config "bridge-457191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:15:05.500497   70624 config.go:182] Loaded profile config "calico-457191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:15:05.500632   70624 config.go:182] Loaded profile config "flannel-457191": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:15:05.500790   70624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:15:05.538551   70624 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:15:05.539894   70624 start.go:297] selected driver: kvm2
	I0422 18:15:05.539910   70624 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:15:05.539923   70624 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:15:05.540701   70624 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:15:05.540776   70624 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:15:05.556106   70624 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:15:05.556168   70624 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 18:15:05.556387   70624 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:15:05.556465   70624 cni.go:84] Creating CNI manager for ""
	I0422 18:15:05.556489   70624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:15:05.556502   70624 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 18:15:05.556572   70624 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:15:05.556686   70624 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:15:05.558672   70624 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:15:05.560008   70624 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:15:05.560056   70624 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:15:05.560063   70624 cache.go:56] Caching tarball of preloaded images
	I0422 18:15:05.560160   70624 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:15:05.560177   70624 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:15:05.560293   70624 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:15:05.560315   70624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json: {Name:mk657450927ff58f96fa3de4d1e9eca2a32e43f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:05.560447   70624 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:15:05.560486   70624 start.go:364] duration metric: took 17.03µs to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:15:05.560504   70624 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:15:05.560562   70624 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 18:15:05.562333   70624 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 18:15:05.562514   70624 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:15:05.562561   70624 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:15:05.577990   70624 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37139
	I0422 18:15:05.578524   70624 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:15:05.579098   70624 main.go:141] libmachine: Using API Version  1
	I0422 18:15:05.579133   70624 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:15:05.579481   70624 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:15:05.579694   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:15:05.579864   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:05.580036   70624 start.go:159] libmachine.API.Create for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:15:05.580064   70624 client.go:168] LocalClient.Create starting
	I0422 18:15:05.580102   70624 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 18:15:05.580153   70624 main.go:141] libmachine: Decoding PEM data...
	I0422 18:15:05.580176   70624 main.go:141] libmachine: Parsing certificate...
	I0422 18:15:05.580240   70624 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 18:15:05.580276   70624 main.go:141] libmachine: Decoding PEM data...
	I0422 18:15:05.580292   70624 main.go:141] libmachine: Parsing certificate...
	I0422 18:15:05.580313   70624 main.go:141] libmachine: Running pre-create checks...
	I0422 18:15:05.580322   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .PreCreateCheck
	I0422 18:15:05.580773   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:15:05.581235   70624 main.go:141] libmachine: Creating machine...
	I0422 18:15:05.581255   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .Create
	I0422 18:15:05.581419   70624 main.go:141] libmachine: (old-k8s-version-367072) Creating KVM machine...
	I0422 18:15:05.582783   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found existing default KVM network
	I0422 18:15:05.584287   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:05.584086   70646 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9d:98:b4} reservation:<nil>}
	I0422 18:15:05.585359   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:05.585261   70646 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:e4:a0:9e} reservation:<nil>}
	I0422 18:15:05.586380   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:05.586291   70646 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:c5:c0:e9} reservation:<nil>}
	I0422 18:15:05.587459   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:05.587367   70646 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000328710}
	I0422 18:15:05.587482   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | created network xml: 
	I0422 18:15:05.587496   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | <network>
	I0422 18:15:05.587511   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |   <name>mk-old-k8s-version-367072</name>
	I0422 18:15:05.587522   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |   <dns enable='no'/>
	I0422 18:15:05.587532   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |   
	I0422 18:15:05.587543   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0422 18:15:05.587555   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |     <dhcp>
	I0422 18:15:05.587567   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0422 18:15:05.587578   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |     </dhcp>
	I0422 18:15:05.587588   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |   </ip>
	I0422 18:15:05.587598   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG |   
	I0422 18:15:05.587610   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | </network>
	I0422 18:15:05.587622   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | 
	I0422 18:15:05.593120   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | trying to create private KVM network mk-old-k8s-version-367072 192.168.72.0/24...
	I0422 18:15:05.680488   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | private KVM network mk-old-k8s-version-367072 192.168.72.0/24 created
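
(Reference sketch, not minikube source: the network XML logged above can be applied by hand through the virsh CLI. A minimal Go wrapper is shown below; the temp-file approach and error handling are illustrative only.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML mirrors the definition printed in the log above.
const networkXML = `<network>
  <name>mk-old-k8s-version-367072</name>
  <dns enable='no'/>
  <ip address='192.168.72.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.72.2' end='192.168.72.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Write the XML to a temp file so virsh can read it.
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// `virsh net-create` builds and starts a transient network from an XML file.
	out, err := exec.Command("virsh", "--connect", "qemu:///system", "net-create", f.Name()).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
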
	I0422 18:15:05.680686   70624 main.go:141] libmachine: (old-k8s-version-367072) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072 ...
	I0422 18:15:05.680719   70624 main.go:141] libmachine: (old-k8s-version-367072) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 18:15:05.680751   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:05.680610   70646 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:15:05.680846   70624 main.go:141] libmachine: (old-k8s-version-367072) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 18:15:05.916869   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:05.916723   70646 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa...
	I0422 18:15:06.136026   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:06.135881   70646 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/old-k8s-version-367072.rawdisk...
	I0422 18:15:06.136065   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Writing magic tar header
	I0422 18:15:06.136084   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Writing SSH key tar header
	I0422 18:15:06.136097   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:06.136034   70646 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072 ...
	I0422 18:15:06.136184   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072
	I0422 18:15:06.136227   70624 main.go:141] libmachine: (old-k8s-version-367072) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072 (perms=drwx------)
	I0422 18:15:06.136244   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 18:15:06.136263   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:15:06.136275   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 18:15:06.136290   70624 main.go:141] libmachine: (old-k8s-version-367072) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 18:15:06.136304   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 18:15:06.136317   70624 main.go:141] libmachine: (old-k8s-version-367072) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 18:15:06.136329   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Checking permissions on dir: /home/jenkins
	I0422 18:15:06.136342   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Checking permissions on dir: /home
	I0422 18:15:06.136353   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Skipping /home - not owner
	I0422 18:15:06.136366   70624 main.go:141] libmachine: (old-k8s-version-367072) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 18:15:06.136385   70624 main.go:141] libmachine: (old-k8s-version-367072) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 18:15:06.136397   70624 main.go:141] libmachine: (old-k8s-version-367072) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 18:15:06.136415   70624 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:15:06.137555   70624 main.go:141] libmachine: (old-k8s-version-367072) define libvirt domain using xml: 
	I0422 18:15:06.137585   70624 main.go:141] libmachine: (old-k8s-version-367072) <domain type='kvm'>
	I0422 18:15:06.137596   70624 main.go:141] libmachine: (old-k8s-version-367072)   <name>old-k8s-version-367072</name>
	I0422 18:15:06.137612   70624 main.go:141] libmachine: (old-k8s-version-367072)   <memory unit='MiB'>2200</memory>
	I0422 18:15:06.137625   70624 main.go:141] libmachine: (old-k8s-version-367072)   <vcpu>2</vcpu>
	I0422 18:15:06.137634   70624 main.go:141] libmachine: (old-k8s-version-367072)   <features>
	I0422 18:15:06.137644   70624 main.go:141] libmachine: (old-k8s-version-367072)     <acpi/>
	I0422 18:15:06.137651   70624 main.go:141] libmachine: (old-k8s-version-367072)     <apic/>
	I0422 18:15:06.137656   70624 main.go:141] libmachine: (old-k8s-version-367072)     <pae/>
	I0422 18:15:06.137663   70624 main.go:141] libmachine: (old-k8s-version-367072)     
	I0422 18:15:06.137668   70624 main.go:141] libmachine: (old-k8s-version-367072)   </features>
	I0422 18:15:06.137675   70624 main.go:141] libmachine: (old-k8s-version-367072)   <cpu mode='host-passthrough'>
	I0422 18:15:06.137705   70624 main.go:141] libmachine: (old-k8s-version-367072)   
	I0422 18:15:06.137729   70624 main.go:141] libmachine: (old-k8s-version-367072)   </cpu>
	I0422 18:15:06.137743   70624 main.go:141] libmachine: (old-k8s-version-367072)   <os>
	I0422 18:15:06.137754   70624 main.go:141] libmachine: (old-k8s-version-367072)     <type>hvm</type>
	I0422 18:15:06.137764   70624 main.go:141] libmachine: (old-k8s-version-367072)     <boot dev='cdrom'/>
	I0422 18:15:06.137774   70624 main.go:141] libmachine: (old-k8s-version-367072)     <boot dev='hd'/>
	I0422 18:15:06.137789   70624 main.go:141] libmachine: (old-k8s-version-367072)     <bootmenu enable='no'/>
	I0422 18:15:06.137800   70624 main.go:141] libmachine: (old-k8s-version-367072)   </os>
	I0422 18:15:06.137830   70624 main.go:141] libmachine: (old-k8s-version-367072)   <devices>
	I0422 18:15:06.137845   70624 main.go:141] libmachine: (old-k8s-version-367072)     <disk type='file' device='cdrom'>
	I0422 18:15:06.137861   70624 main.go:141] libmachine: (old-k8s-version-367072)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/boot2docker.iso'/>
	I0422 18:15:06.137874   70624 main.go:141] libmachine: (old-k8s-version-367072)       <target dev='hdc' bus='scsi'/>
	I0422 18:15:06.137884   70624 main.go:141] libmachine: (old-k8s-version-367072)       <readonly/>
	I0422 18:15:06.137901   70624 main.go:141] libmachine: (old-k8s-version-367072)     </disk>
	I0422 18:15:06.137918   70624 main.go:141] libmachine: (old-k8s-version-367072)     <disk type='file' device='disk'>
	I0422 18:15:06.137934   70624 main.go:141] libmachine: (old-k8s-version-367072)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 18:15:06.137951   70624 main.go:141] libmachine: (old-k8s-version-367072)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/old-k8s-version-367072.rawdisk'/>
	I0422 18:15:06.137965   70624 main.go:141] libmachine: (old-k8s-version-367072)       <target dev='hda' bus='virtio'/>
	I0422 18:15:06.137971   70624 main.go:141] libmachine: (old-k8s-version-367072)     </disk>
	I0422 18:15:06.137981   70624 main.go:141] libmachine: (old-k8s-version-367072)     <interface type='network'>
	I0422 18:15:06.137992   70624 main.go:141] libmachine: (old-k8s-version-367072)       <source network='mk-old-k8s-version-367072'/>
	I0422 18:15:06.138001   70624 main.go:141] libmachine: (old-k8s-version-367072)       <model type='virtio'/>
	I0422 18:15:06.138012   70624 main.go:141] libmachine: (old-k8s-version-367072)     </interface>
	I0422 18:15:06.138040   70624 main.go:141] libmachine: (old-k8s-version-367072)     <interface type='network'>
	I0422 18:15:06.138061   70624 main.go:141] libmachine: (old-k8s-version-367072)       <source network='default'/>
	I0422 18:15:06.138071   70624 main.go:141] libmachine: (old-k8s-version-367072)       <model type='virtio'/>
	I0422 18:15:06.138078   70624 main.go:141] libmachine: (old-k8s-version-367072)     </interface>
	I0422 18:15:06.138088   70624 main.go:141] libmachine: (old-k8s-version-367072)     <serial type='pty'>
	I0422 18:15:06.138096   70624 main.go:141] libmachine: (old-k8s-version-367072)       <target port='0'/>
	I0422 18:15:06.138105   70624 main.go:141] libmachine: (old-k8s-version-367072)     </serial>
	I0422 18:15:06.138110   70624 main.go:141] libmachine: (old-k8s-version-367072)     <console type='pty'>
	I0422 18:15:06.138116   70624 main.go:141] libmachine: (old-k8s-version-367072)       <target type='serial' port='0'/>
	I0422 18:15:06.138122   70624 main.go:141] libmachine: (old-k8s-version-367072)     </console>
	I0422 18:15:06.138132   70624 main.go:141] libmachine: (old-k8s-version-367072)     <rng model='virtio'>
	I0422 18:15:06.138146   70624 main.go:141] libmachine: (old-k8s-version-367072)       <backend model='random'>/dev/random</backend>
	I0422 18:15:06.138166   70624 main.go:141] libmachine: (old-k8s-version-367072)     </rng>
	I0422 18:15:06.138172   70624 main.go:141] libmachine: (old-k8s-version-367072)     
	I0422 18:15:06.138180   70624 main.go:141] libmachine: (old-k8s-version-367072)     
	I0422 18:15:06.138187   70624 main.go:141] libmachine: (old-k8s-version-367072)   </devices>
	I0422 18:15:06.138195   70624 main.go:141] libmachine: (old-k8s-version-367072) </domain>
	I0422 18:15:06.138199   70624 main.go:141] libmachine: (old-k8s-version-367072) 
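
(Reference sketch, not minikube source: once the domain XML above exists, the equivalent define-then-start sequence can be driven through virsh; the XML path below is hypothetical.)

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Register the domain definition, then boot it - the same define/start
	// order the log shows for old-k8s-version-367072.
	steps := [][]string{
		{"define", "/tmp/old-k8s-version-367072.xml"}, // hypothetical file holding the XML above
		{"start", "old-k8s-version-367072"},
	}
	for _, args := range steps {
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
	}
}
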
	I0422 18:15:06.142592   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:00:dd:8a in network default
	I0422 18:15:06.143372   70624 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:15:06.143411   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:06.144163   70624 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:15:06.144516   70624 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:15:06.145050   70624 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:15:06.145796   70624 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:15:07.460551   70624 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:15:07.461380   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:07.461865   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:07.461885   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:07.461840   70646 retry.go:31] will retry after 300.118483ms: waiting for machine to come up
	I0422 18:15:07.763362   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:07.764006   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:07.764034   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:07.763952   70646 retry.go:31] will retry after 318.314093ms: waiting for machine to come up
	I0422 18:15:08.084484   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:08.085028   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:08.085058   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:08.084975   70646 retry.go:31] will retry after 334.27639ms: waiting for machine to come up
	I0422 18:15:08.421587   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:08.422225   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:08.422259   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:08.422156   70646 retry.go:31] will retry after 446.758979ms: waiting for machine to come up
	I0422 18:15:08.870915   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:08.871906   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:08.871929   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:08.871616   70646 retry.go:31] will retry after 544.631271ms: waiting for machine to come up
	I0422 18:15:09.419152   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:09.419911   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:09.419935   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:09.419823   70646 retry.go:31] will retry after 886.785954ms: waiting for machine to come up
	I0422 18:15:10.308263   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:10.308799   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:10.308827   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:10.308746   70646 retry.go:31] will retry after 841.779806ms: waiting for machine to come up
	I0422 18:15:11.152266   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:11.153261   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:11.153293   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:11.153175   70646 retry.go:31] will retry after 1.181330081s: waiting for machine to come up
	I0422 18:15:12.336292   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:12.336934   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:12.336964   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:12.336915   70646 retry.go:31] will retry after 1.485855113s: waiting for machine to come up
	I0422 18:15:13.824472   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:13.825097   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:13.825126   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:13.825001   70646 retry.go:31] will retry after 2.327106352s: waiting for machine to come up
	I0422 18:15:16.153493   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:16.155236   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:16.155261   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:16.155182   70646 retry.go:31] will retry after 2.698896485s: waiting for machine to come up
	I0422 18:15:18.857791   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:18.858623   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:18.858643   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:18.858531   70646 retry.go:31] will retry after 2.780670501s: waiting for machine to come up
	I0422 18:15:21.641219   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:21.641681   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:21.641710   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:21.641641   70646 retry.go:31] will retry after 3.170110693s: waiting for machine to come up
	I0422 18:15:24.814099   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:24.814709   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:15:24.814755   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:15:24.814682   70646 retry.go:31] will retry after 5.306249298s: waiting for machine to come up
	I0422 18:15:30.124190   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:30.124811   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:30.124846   70624 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:15:30.124859   70624 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:15:30.125189   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072
	I0422 18:15:30.207013   70624 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
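
(Reference sketch, not minikube source: the "will retry after" loop above amounts to polling the network's DHCP leases for the domain's MAC address with a growing back-off; the attempt count and delays below are illustrative.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const mac = "52:54:00:82:9f:b2" // MAC the log reports for the domain
	delay := 300 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		out, _ := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dhcp-leases", "mk-old-k8s-version-367072").Output()
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, mac) {
				fmt.Println("lease found:", strings.TrimSpace(line))
				return
			}
		}
		fmt.Printf("attempt %d: no lease yet, retrying in %v\n", attempt, delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait, roughly as the logged retries do
	}
	fmt.Println("gave up waiting for a DHCP lease")
}
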
	I0422 18:15:30.207046   70624 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:15:30.207056   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:15:30.210183   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:30.210518   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072
	I0422 18:15:30.210545   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find defined IP address of network mk-old-k8s-version-367072 interface with MAC address 52:54:00:82:9f:b2
	I0422 18:15:30.210646   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:15:30.210676   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:15:30.210712   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:15:30.210743   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:15:30.210757   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:15:30.214557   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: exit status 255: 
	I0422 18:15:30.214588   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0422 18:15:30.214596   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | command : exit 0
	I0422 18:15:30.214607   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | err     : exit status 255
	I0422 18:15:30.214641   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | output  : 
	I0422 18:15:33.216357   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:15:33.218985   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.219408   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.219434   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.219574   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:15:33.219587   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:15:33.219608   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:15:33.219617   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:15:33.219625   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:15:33.343358   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
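
(Reference sketch, not minikube source: the WaitForSSH step above simply runs `exit 0` through the external ssh client until it succeeds; the fixed 3-second retry approximates the gap between the failed and successful attempts in the log.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	key := "/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa"
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", key,
		"docker@192.168.72.149",
		"exit 0",
	}
	for {
		// Exit status 0 means the guest is reachable and accepting the key.
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
}
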
	I0422 18:15:33.343647   70624 main.go:141] libmachine: (old-k8s-version-367072) KVM machine creation complete!
	I0422 18:15:33.343976   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:15:33.344446   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:33.344669   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:33.344821   70624 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0422 18:15:33.344837   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:15:33.346185   70624 main.go:141] libmachine: Detecting operating system of created instance...
	I0422 18:15:33.346200   70624 main.go:141] libmachine: Waiting for SSH to be available...
	I0422 18:15:33.346206   70624 main.go:141] libmachine: Getting to WaitForSSH function...
	I0422 18:15:33.346212   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:33.348726   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.349132   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.349157   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.349326   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:33.349481   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.349645   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.349749   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:33.349952   70624 main.go:141] libmachine: Using SSH client type: native
	I0422 18:15:33.350124   70624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:15:33.350135   70624 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0422 18:15:33.454657   70624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:15:33.454697   70624 main.go:141] libmachine: Detecting the provisioner...
	I0422 18:15:33.454724   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:33.457720   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.458122   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.458161   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.458260   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:33.458476   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.458681   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.458854   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:33.459029   70624 main.go:141] libmachine: Using SSH client type: native
	I0422 18:15:33.459276   70624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:15:33.459291   70624 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0422 18:15:33.568842   70624 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0422 18:15:33.568949   70624 main.go:141] libmachine: found compatible host: buildroot
	I0422 18:15:33.568977   70624 main.go:141] libmachine: Provisioning with buildroot...
	I0422 18:15:33.568994   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:15:33.569256   70624 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:15:33.569315   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:15:33.569530   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:33.572055   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.572507   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.572541   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.572661   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:33.572886   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.573051   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.573278   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:33.573462   70624 main.go:141] libmachine: Using SSH client type: native
	I0422 18:15:33.573675   70624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:15:33.573694   70624 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:15:33.694895   70624 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:15:33.694929   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:33.697661   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.698036   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.698065   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.698256   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:33.698444   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.698630   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.698781   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:33.698966   70624 main.go:141] libmachine: Using SSH client type: native
	I0422 18:15:33.699156   70624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:15:33.699181   70624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:15:33.818813   70624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:15:33.818846   70624 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:15:33.818865   70624 buildroot.go:174] setting up certificates
	I0422 18:15:33.818877   70624 provision.go:84] configureAuth start
	I0422 18:15:33.818886   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:15:33.819194   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:15:33.822143   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.822547   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.822582   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.822680   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:33.825160   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.825497   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.825530   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.825710   70624 provision.go:143] copyHostCerts
	I0422 18:15:33.825773   70624 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:15:33.825783   70624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:15:33.825856   70624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:15:33.825951   70624 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:15:33.825958   70624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:15:33.825981   70624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:15:33.826048   70624 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:15:33.826061   70624 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:15:33.826092   70624 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:15:33.826165   70624 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
	I0422 18:15:33.950766   70624 provision.go:177] copyRemoteCerts
	I0422 18:15:33.950819   70624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:15:33.950846   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:33.954130   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.954500   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:33.954547   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:33.954678   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:33.954884   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:33.955090   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:33.955260   70624 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:15:34.039278   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:15:34.067428   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:15:34.098199   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:15:34.128206   70624 provision.go:87] duration metric: took 309.318014ms to configureAuth
	I0422 18:15:34.128236   70624 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:15:34.128398   70624 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:15:34.128460   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:34.130867   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.131184   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.131216   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.131367   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:34.131570   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:34.131739   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:34.131890   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:34.132082   70624 main.go:141] libmachine: Using SSH client type: native
	I0422 18:15:34.132244   70624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:15:34.132265   70624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:15:34.410770   70624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:15:34.410805   70624 main.go:141] libmachine: Checking connection to Docker...
	I0422 18:15:34.410815   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetURL
	I0422 18:15:34.412190   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using libvirt version 6000000
	I0422 18:15:34.415402   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.415786   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.415809   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.416046   70624 main.go:141] libmachine: Docker is up and running!
	I0422 18:15:34.416064   70624 main.go:141] libmachine: Reticulating splines...
	I0422 18:15:34.416071   70624 client.go:171] duration metric: took 28.835997512s to LocalClient.Create
	I0422 18:15:34.416099   70624 start.go:167] duration metric: took 28.836062466s to libmachine.API.Create "old-k8s-version-367072"
	I0422 18:15:34.416112   70624 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:15:34.416125   70624 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:15:34.416146   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:34.416402   70624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:15:34.416423   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:34.419001   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.419410   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.419469   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.419583   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:34.419796   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:34.419947   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:34.420096   70624 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:15:34.502489   70624 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:15:34.507003   70624 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:15:34.507039   70624 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:15:34.507100   70624 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:15:34.507208   70624 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:15:34.507332   70624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:15:34.518496   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:15:34.544362   70624 start.go:296] duration metric: took 128.235074ms for postStartSetup
	I0422 18:15:34.544416   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:15:34.545007   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:15:34.547551   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.547900   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.547944   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.548170   70624 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:15:34.548362   70624 start.go:128] duration metric: took 28.987785707s to createHost
	I0422 18:15:34.548387   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:34.550534   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.550880   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.550903   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.551036   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:34.551266   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:34.551425   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:34.551577   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:34.551711   70624 main.go:141] libmachine: Using SSH client type: native
	I0422 18:15:34.551896   70624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:15:34.551907   70624 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:15:34.660246   70624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713809734.646618488
	
	I0422 18:15:34.660270   70624 fix.go:216] guest clock: 1713809734.646618488
	I0422 18:15:34.660279   70624 fix.go:229] Guest: 2024-04-22 18:15:34.646618488 +0000 UTC Remote: 2024-04-22 18:15:34.548375764 +0000 UTC m=+29.120172985 (delta=98.242724ms)
	I0422 18:15:34.660316   70624 fix.go:200] guest clock delta is within tolerance: 98.242724ms
	I0422 18:15:34.660322   70624 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 29.099825936s
	I0422 18:15:34.660348   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:34.660650   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:15:34.663591   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.664090   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.664126   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.664371   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:34.665022   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:34.665206   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:15:34.665306   70624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:15:34.665343   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:34.665423   70624 ssh_runner.go:195] Run: cat /version.json
	I0422 18:15:34.665446   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:15:34.668358   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.668486   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.668754   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.668782   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.669041   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:34.669045   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:34.669072   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:34.669245   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:34.669324   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:15:34.669411   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:34.669482   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:15:34.669555   70624 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:15:34.669636   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:15:34.669778   70624 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:15:34.749166   70624 ssh_runner.go:195] Run: systemctl --version
	I0422 18:15:34.787066   70624 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:15:34.966723   70624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:15:34.973495   70624 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:15:34.973574   70624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:15:34.992895   70624 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:15:34.992927   70624 start.go:494] detecting cgroup driver to use...
	I0422 18:15:34.992996   70624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:15:35.015683   70624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:15:35.033181   70624 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:15:35.033244   70624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:15:35.048924   70624 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:15:35.064903   70624 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:15:35.198401   70624 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:15:35.388403   70624 docker.go:233] disabling docker service ...
	I0422 18:15:35.388489   70624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:15:35.404559   70624 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:15:35.419767   70624 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:15:35.541029   70624 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:15:35.675827   70624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:15:35.691611   70624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:15:35.711040   70624 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:15:35.711114   70624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:15:35.722274   70624 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:15:35.722357   70624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:15:35.733644   70624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:15:35.744363   70624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:15:35.755989   70624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:15:35.767199   70624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:15:35.778267   70624 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:15:35.778342   70624 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:15:35.793144   70624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:15:35.803815   70624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:15:35.942923   70624 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:15:36.415977   70624 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:15:36.416057   70624 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:15:36.421817   70624 start.go:562] Will wait 60s for crictl version
	I0422 18:15:36.421881   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:36.426509   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:15:36.475094   70624 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:15:36.475224   70624 ssh_runner.go:195] Run: crio --version
	I0422 18:15:36.513597   70624 ssh_runner.go:195] Run: crio --version
	I0422 18:15:36.615834   70624 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:15:36.623420   70624 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:15:36.627037   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:36.627569   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:15:22 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:15:36.627600   70624 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:15:36.627899   70624 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:15:36.633632   70624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:15:36.649577   70624 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:15:36.649689   70624 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:15:36.649743   70624 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:15:36.692981   70624 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:15:36.693075   70624 ssh_runner.go:195] Run: which lz4
	I0422 18:15:36.697738   70624 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:15:36.702634   70624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:15:36.702668   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:15:38.687028   70624 crio.go:462] duration metric: took 1.989312901s to copy over tarball
	I0422 18:15:38.687099   70624 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:15:41.882647   70624 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.195525982s)
	I0422 18:15:41.882679   70624 crio.go:469] duration metric: took 3.195623705s to extract the tarball
	I0422 18:15:41.882688   70624 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:15:41.928527   70624 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:15:41.996186   70624 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:15:41.996228   70624 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:15:41.996311   70624 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:15:41.996356   70624 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:15:41.996353   70624 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:15:41.996390   70624 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:15:41.996417   70624 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:15:41.996398   70624 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:15:41.996502   70624 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:15:41.996337   70624 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:15:41.997904   70624 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:15:41.997918   70624 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:15:41.997960   70624 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:15:41.998003   70624 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:15:41.998018   70624 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:15:41.998153   70624 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:15:41.998348   70624 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:15:41.998559   70624 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:15:42.215750   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:15:42.241733   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:15:42.245172   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:15:42.247528   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:15:42.247819   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:15:42.252459   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:15:42.274365   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:15:42.294622   70624 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:15:42.294675   70624 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:15:42.294719   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:42.412947   70624 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:15:42.413005   70624 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:15:42.413069   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:42.427672   70624 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:15:42.427724   70624 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:15:42.427734   70624 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:15:42.427762   70624 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:15:42.427800   70624 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:15:42.427837   70624 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:15:42.427876   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:42.427916   70624 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:15:42.427987   70624 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:15:42.428024   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:42.427812   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:42.428101   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:42.454879   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:15:42.454973   70624 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:15:42.455020   70624 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:15:42.455025   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:15:42.455062   70624 ssh_runner.go:195] Run: which crictl
	I0422 18:15:42.455120   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:15:42.455174   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:15:42.455238   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:15:42.455257   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:15:42.598307   70624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:15:42.624213   70624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:15:42.624261   70624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:15:42.624303   70624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:15:42.624390   70624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:15:42.624474   70624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:15:42.624539   70624 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:15:42.667639   70624 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:15:42.818414   70624 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:15:42.972769   70624 cache_images.go:92] duration metric: took 976.519779ms to LoadCachedImages
	W0422 18:15:42.972878   70624 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0422 18:15:42.972895   70624 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:15:42.973035   70624 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:15:42.973121   70624 ssh_runner.go:195] Run: crio config
	I0422 18:15:43.030056   70624 cni.go:84] Creating CNI manager for ""
	I0422 18:15:43.030085   70624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:15:43.030102   70624 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:15:43.030125   70624 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:15:43.030301   70624 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:15:43.030390   70624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:15:43.044882   70624 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:15:43.044953   70624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:15:43.055354   70624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:15:43.081236   70624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:15:43.103547   70624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0422 18:15:43.125903   70624 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:15:43.130642   70624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:15:43.145098   70624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:15:43.292453   70624 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:15:43.318217   70624 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:15:43.318243   70624 certs.go:194] generating shared ca certs ...
	I0422 18:15:43.318259   70624 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:43.318416   70624 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:15:43.318454   70624 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:15:43.318464   70624 certs.go:256] generating profile certs ...
	I0422 18:15:43.318520   70624 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:15:43.318533   70624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.crt with IP's: []
	I0422 18:15:43.615741   70624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.crt ...
	I0422 18:15:43.615775   70624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.crt: {Name:mk99ee7081b4d820f4f0cdb95ff821b8e5544706 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:43.615979   70624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key ...
	I0422 18:15:43.616010   70624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key: {Name:mk1ca290a65fe87f287a91cddf38e6d7c69b5336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:43.616135   70624 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:15:43.616159   70624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt.653b7478 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.149]
	I0422 18:15:43.773496   70624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt.653b7478 ...
	I0422 18:15:43.773530   70624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt.653b7478: {Name:mk2553333cabc06c1483410d4a6d30e69360cd44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:43.773683   70624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478 ...
	I0422 18:15:43.773696   70624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478: {Name:mk622a2847684716bd898ddc54a1fb96e6a8536f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:43.773760   70624 certs.go:381] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt.653b7478 -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt
	I0422 18:15:43.773834   70624 certs.go:385] copying /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478 -> /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key
	I0422 18:15:43.773885   70624 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:15:43.773904   70624 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt with IP's: []
	I0422 18:15:44.022613   70624 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt ...
	I0422 18:15:44.022647   70624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt: {Name:mk541f276b69834747468929f0d2b18d409eb2ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:44.022809   70624 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key ...
	I0422 18:15:44.022824   70624 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key: {Name:mkd95020b5acd83d6df34da962b44cc339ee9930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:15:44.022984   70624 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:15:44.023023   70624 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:15:44.023031   70624 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:15:44.023050   70624 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:15:44.023073   70624 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:15:44.023094   70624 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:15:44.023159   70624 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:15:44.023756   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:15:44.053586   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:15:44.080718   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:15:44.108613   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:15:44.135643   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:15:44.165370   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:15:44.194224   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:15:44.225602   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:15:44.262236   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:15:44.290339   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:15:44.320407   70624 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:15:44.349276   70624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:15:44.369402   70624 ssh_runner.go:195] Run: openssl version
	I0422 18:15:44.375984   70624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:15:44.389513   70624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:15:44.394885   70624 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:15:44.394950   70624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:15:44.401035   70624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:15:44.413494   70624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:15:44.426027   70624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:15:44.431050   70624 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:15:44.431111   70624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:15:44.437763   70624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:15:44.453112   70624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:15:44.467577   70624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:15:44.473342   70624 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:15:44.473408   70624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:15:44.480089   70624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:15:44.493058   70624 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:15:44.498095   70624 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 18:15:44.498159   70624 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:15:44.498262   70624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:15:44.498319   70624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:15:44.546845   70624 cri.go:89] found id: ""
	I0422 18:15:44.546937   70624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 18:15:44.560184   70624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:15:44.572400   70624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:15:44.584636   70624 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:15:44.584659   70624 kubeadm.go:156] found existing configuration files:
	
	I0422 18:15:44.584730   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:15:44.597464   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:15:44.597521   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:15:44.610920   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:15:44.623779   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:15:44.623843   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:15:44.637004   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:15:44.649635   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:15:44.649690   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:15:44.663096   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:15:44.673756   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:15:44.673825   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:15:44.684762   70624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:15:44.796889   70624 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:15:44.796976   70624 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:15:44.973954   70624 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:15:44.974095   70624 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:15:44.974215   70624 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:15:45.195780   70624 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:15:45.198959   70624 out.go:204]   - Generating certificates and keys ...
	I0422 18:15:45.199099   70624 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:15:45.199243   70624 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:15:45.395359   70624 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 18:15:45.715373   70624 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 18:15:46.071552   70624 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 18:15:46.237015   70624 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 18:15:46.409703   70624 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 18:15:46.410096   70624 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-367072] and IPs [192.168.72.149 127.0.0.1 ::1]
	I0422 18:15:46.473964   70624 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 18:15:46.474380   70624 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-367072] and IPs [192.168.72.149 127.0.0.1 ::1]
	I0422 18:15:46.745438   70624 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 18:15:46.940071   70624 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 18:15:47.133613   70624 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 18:15:47.133734   70624 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:15:47.357516   70624 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:15:47.522905   70624 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:15:47.701690   70624 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:15:47.856441   70624 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:15:47.883573   70624 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:15:47.883697   70624 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:15:47.883749   70624 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:15:48.030990   70624 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:15:48.033018   70624 out.go:204]   - Booting up control plane ...
	I0422 18:15:48.033190   70624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:15:48.045307   70624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:15:48.048173   70624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:15:48.049583   70624 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:15:48.058800   70624 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:16:28.057582   70624 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:16:28.058331   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:16:28.058527   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:16:33.060144   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:16:33.060631   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:16:43.061592   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:16:43.061856   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:17:03.062798   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:17:03.063070   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:17:43.062805   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:17:43.063135   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:17:43.063159   70624 kubeadm.go:309] 
	I0422 18:17:43.063233   70624 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:17:43.063293   70624 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:17:43.063303   70624 kubeadm.go:309] 
	I0422 18:17:43.063356   70624 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:17:43.063404   70624 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:17:43.063504   70624 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:17:43.063512   70624 kubeadm.go:309] 
	I0422 18:17:43.063602   70624 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:17:43.063632   70624 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:17:43.063661   70624 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:17:43.063667   70624 kubeadm.go:309] 
	I0422 18:17:43.063777   70624 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:17:43.063874   70624 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:17:43.063883   70624 kubeadm.go:309] 
	I0422 18:17:43.063967   70624 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:17:43.064075   70624 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:17:43.064157   70624 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:17:43.064223   70624 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:17:43.064231   70624 kubeadm.go:309] 
	I0422 18:17:43.065085   70624 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:17:43.065200   70624 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:17:43.065277   70624 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:17:43.065425   70624 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-367072] and IPs [192.168.72.149 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-367072] and IPs [192.168.72.149 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-367072] and IPs [192.168.72.149 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-367072] and IPs [192.168.72.149 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0422 18:17:43.065485   70624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:17:45.698580   70624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.633068038s)
	I0422 18:17:45.698656   70624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:17:45.714299   70624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:17:45.724718   70624 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:17:45.724742   70624 kubeadm.go:156] found existing configuration files:
	
	I0422 18:17:45.724785   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:17:45.735052   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:17:45.735106   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:17:45.745692   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:17:45.755319   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:17:45.755380   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:17:45.766445   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:17:45.776642   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:17:45.776723   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:17:45.787549   70624 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:17:45.797945   70624 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:17:45.798015   70624 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:17:45.811767   70624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:17:46.038208   70624 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:19:41.956404   70624 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:19:41.956537   70624 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:19:41.957571   70624 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:19:41.957618   70624 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:19:41.957717   70624 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:19:41.957850   70624 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:19:41.957965   70624 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:19:41.958058   70624 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:19:41.959771   70624 out.go:204]   - Generating certificates and keys ...
	I0422 18:19:41.959866   70624 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:19:41.959942   70624 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:19:41.960036   70624 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:19:41.960158   70624 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:19:41.960260   70624 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:19:41.960337   70624 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:19:41.960429   70624 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:19:41.960519   70624 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:19:41.960615   70624 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:19:41.960734   70624 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:19:41.960793   70624 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:19:41.960875   70624 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:19:41.960973   70624 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:19:41.961051   70624 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:19:41.961143   70624 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:19:41.961223   70624 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:19:41.961355   70624 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:19:41.961472   70624 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:19:41.961532   70624 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:19:41.961624   70624 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:19:41.963396   70624 out.go:204]   - Booting up control plane ...
	I0422 18:19:41.963504   70624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:19:41.963596   70624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:19:41.963674   70624 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:19:41.963753   70624 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:19:41.963919   70624 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:19:41.964007   70624 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:19:41.964114   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:19:41.964312   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:19:41.964413   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:19:41.964618   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:19:41.964689   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:19:41.964854   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:19:41.964921   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:19:41.965090   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:19:41.965191   70624 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:19:41.965361   70624 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:19:41.965372   70624 kubeadm.go:309] 
	I0422 18:19:41.965409   70624 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:19:41.965444   70624 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:19:41.965450   70624 kubeadm.go:309] 
	I0422 18:19:41.965481   70624 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:19:41.965510   70624 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:19:41.965600   70624 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:19:41.965610   70624 kubeadm.go:309] 
	I0422 18:19:41.965695   70624 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:19:41.965724   70624 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:19:41.965753   70624 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:19:41.965759   70624 kubeadm.go:309] 
	I0422 18:19:41.965851   70624 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:19:41.965972   70624 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:19:41.965984   70624 kubeadm.go:309] 
	I0422 18:19:41.966134   70624 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:19:41.966231   70624 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:19:41.966300   70624 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:19:41.966379   70624 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:19:41.966443   70624 kubeadm.go:309] 
	I0422 18:19:41.966502   70624 kubeadm.go:393] duration metric: took 3m57.468347007s to StartCluster
	I0422 18:19:41.966548   70624 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:19:41.966608   70624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:19:42.009948   70624 cri.go:89] found id: ""
	I0422 18:19:42.009973   70624 logs.go:276] 0 containers: []
	W0422 18:19:42.009983   70624 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:19:42.009991   70624 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:19:42.010058   70624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:19:42.045878   70624 cri.go:89] found id: ""
	I0422 18:19:42.045904   70624 logs.go:276] 0 containers: []
	W0422 18:19:42.045912   70624 logs.go:278] No container was found matching "etcd"
	I0422 18:19:42.045917   70624 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:19:42.045997   70624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:19:42.084522   70624 cri.go:89] found id: ""
	I0422 18:19:42.084552   70624 logs.go:276] 0 containers: []
	W0422 18:19:42.084559   70624 logs.go:278] No container was found matching "coredns"
	I0422 18:19:42.084564   70624 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:19:42.084608   70624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:19:42.120870   70624 cri.go:89] found id: ""
	I0422 18:19:42.120899   70624 logs.go:276] 0 containers: []
	W0422 18:19:42.120906   70624 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:19:42.120912   70624 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:19:42.120970   70624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:19:42.156636   70624 cri.go:89] found id: ""
	I0422 18:19:42.156665   70624 logs.go:276] 0 containers: []
	W0422 18:19:42.156677   70624 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:19:42.156685   70624 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:19:42.156746   70624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:19:42.193621   70624 cri.go:89] found id: ""
	I0422 18:19:42.193647   70624 logs.go:276] 0 containers: []
	W0422 18:19:42.193660   70624 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:19:42.193668   70624 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:19:42.193724   70624 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:19:42.230231   70624 cri.go:89] found id: ""
	I0422 18:19:42.230256   70624 logs.go:276] 0 containers: []
	W0422 18:19:42.230264   70624 logs.go:278] No container was found matching "kindnet"
	I0422 18:19:42.230273   70624 logs.go:123] Gathering logs for kubelet ...
	I0422 18:19:42.230284   70624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:19:42.279375   70624 logs.go:123] Gathering logs for dmesg ...
	I0422 18:19:42.279405   70624 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:19:42.293656   70624 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:19:42.293687   70624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:19:42.410945   70624 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:19:42.410977   70624 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:19:42.410993   70624 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:19:42.514595   70624 logs.go:123] Gathering logs for container status ...
	I0422 18:19:42.514632   70624 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0422 18:19:42.570579   70624 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:19:42.570624   70624 out.go:239] * 
	* 
	W0422 18:19:42.570682   70624 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:19:42.570709   70624 out.go:239] * 
	* 
	W0422 18:19:42.571586   70624 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:19:42.575548   70624 out.go:177] 
	W0422 18:19:42.576992   70624 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:19:42.577051   70624 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:19:42.577073   70624 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:19:42.578824   70624 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-367072 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 6 (234.336742ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 18:19:42.859036   77442 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-367072" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (277.45s)
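The kubeadm failure above comes down to the kubelet never answering its health probe on port 10248, and the captured log already names the next steps. A minimal triage sketch, assuming SSH access to the VM through the same profile name, the CRI-O socket path quoted in the kubeadm hint, and the same minikube binary path used by the test (the retry flags simply mirror the failing test command plus the cgroup-driver override the log suggests):

    # Inspect kubelet state and recent logs inside the guest
    out/minikube-linux-amd64 ssh -p old-k8s-version-367072 -- sudo systemctl status kubelet
    out/minikube-linux-amd64 ssh -p old-k8s-version-367072 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100

    # Look for a crashed control-plane container under CRI-O
    out/minikube-linux-amd64 ssh -p old-k8s-version-367072 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Retry the start with the suggested cgroup-driver override
    out/minikube-linux-amd64 start -p old-k8s-version-367072 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd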

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-407991 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-407991 --alsologtostderr -v=3: exit status 82 (2m0.571597565s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-407991"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 18:17:09.271297   75999 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:17:09.271845   75999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:17:09.271863   75999 out.go:304] Setting ErrFile to fd 2...
	I0422 18:17:09.271871   75999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:17:09.273005   75999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:17:09.273444   75999 out.go:298] Setting JSON to false
	I0422 18:17:09.273520   75999 mustload.go:65] Loading cluster: no-preload-407991
	I0422 18:17:09.273895   75999 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:17:09.273957   75999 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/config.json ...
	I0422 18:17:09.274126   75999 mustload.go:65] Loading cluster: no-preload-407991
	I0422 18:17:09.274222   75999 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:17:09.274272   75999 stop.go:39] StopHost: no-preload-407991
	I0422 18:17:09.274625   75999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:17:09.274665   75999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:17:09.290045   75999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40697
	I0422 18:17:09.290714   75999 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:17:09.291411   75999 main.go:141] libmachine: Using API Version  1
	I0422 18:17:09.291445   75999 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:17:09.291864   75999 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:17:09.294331   75999 out.go:177] * Stopping node "no-preload-407991"  ...
	I0422 18:17:09.295611   75999 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 18:17:09.295656   75999 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:17:09.295891   75999 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 18:17:09.295932   75999 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:17:09.299347   75999 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:17:09.299857   75999 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:17:09.299889   75999 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:17:09.299905   75999 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:17:09.300097   75999 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:17:09.300267   75999 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:17:09.300431   75999 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:17:09.426159   75999 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 18:17:09.497031   75999 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 18:17:09.559213   75999 main.go:141] libmachine: Stopping "no-preload-407991"...
	I0422 18:17:09.559249   75999 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:17:09.561351   75999 main.go:141] libmachine: (no-preload-407991) Calling .Stop
	I0422 18:17:09.566111   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 0/120
	I0422 18:17:10.567477   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 1/120
	I0422 18:17:11.569892   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 2/120
	I0422 18:17:12.571796   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 3/120
	I0422 18:17:13.573256   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 4/120
	I0422 18:17:14.575599   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 5/120
	I0422 18:17:15.576975   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 6/120
	I0422 18:17:16.579297   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 7/120
	I0422 18:17:17.580651   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 8/120
	I0422 18:17:18.582285   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 9/120
	I0422 18:17:19.583696   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 10/120
	I0422 18:17:20.585793   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 11/120
	I0422 18:17:21.587178   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 12/120
	I0422 18:17:22.588812   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 13/120
	I0422 18:17:23.590131   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 14/120
	I0422 18:17:24.592078   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 15/120
	I0422 18:17:25.593441   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 16/120
	I0422 18:17:26.594767   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 17/120
	I0422 18:17:27.596365   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 18/120
	I0422 18:17:28.597640   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 19/120
	I0422 18:17:29.599615   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 20/120
	I0422 18:17:30.601042   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 21/120
	I0422 18:17:31.602549   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 22/120
	I0422 18:17:32.603888   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 23/120
	I0422 18:17:33.605567   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 24/120
	I0422 18:17:34.607701   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 25/120
	I0422 18:17:35.609746   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 26/120
	I0422 18:17:36.611345   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 27/120
	I0422 18:17:37.613766   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 28/120
	I0422 18:17:38.615087   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 29/120
	I0422 18:17:39.617256   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 30/120
	I0422 18:17:40.618659   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 31/120
	I0422 18:17:41.619862   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 32/120
	I0422 18:17:42.622349   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 33/120
	I0422 18:17:43.623790   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 34/120
	I0422 18:17:44.625505   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 35/120
	I0422 18:17:45.626945   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 36/120
	I0422 18:17:46.629588   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 37/120
	I0422 18:17:47.630943   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 38/120
	I0422 18:17:48.632279   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 39/120
	I0422 18:17:49.633696   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 40/120
	I0422 18:17:50.635151   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 41/120
	I0422 18:17:51.636466   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 42/120
	I0422 18:17:52.638207   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 43/120
	I0422 18:17:53.639706   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 44/120
	I0422 18:17:54.641865   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 45/120
	I0422 18:17:55.643430   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 46/120
	I0422 18:17:56.644813   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 47/120
	I0422 18:17:57.646379   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 48/120
	I0422 18:17:58.647838   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 49/120
	I0422 18:17:59.650295   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 50/120
	I0422 18:18:00.651608   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 51/120
	I0422 18:18:01.652979   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 52/120
	I0422 18:18:02.654702   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 53/120
	I0422 18:18:03.656054   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 54/120
	I0422 18:18:04.657592   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 55/120
	I0422 18:18:05.659461   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 56/120
	I0422 18:18:06.660831   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 57/120
	I0422 18:18:07.662247   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 58/120
	I0422 18:18:08.663825   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 59/120
	I0422 18:18:09.665698   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 60/120
	I0422 18:18:10.667064   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 61/120
	I0422 18:18:11.668457   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 62/120
	I0422 18:18:12.669905   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 63/120
	I0422 18:18:13.671410   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 64/120
	I0422 18:18:14.673701   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 65/120
	I0422 18:18:15.675254   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 66/120
	I0422 18:18:16.676760   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 67/120
	I0422 18:18:17.678365   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 68/120
	I0422 18:18:18.679810   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 69/120
	I0422 18:18:19.681185   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 70/120
	I0422 18:18:20.682729   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 71/120
	I0422 18:18:21.684069   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 72/120
	I0422 18:18:22.685494   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 73/120
	I0422 18:18:23.687233   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 74/120
	I0422 18:18:24.689425   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 75/120
	I0422 18:18:25.690870   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 76/120
	I0422 18:18:26.692410   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 77/120
	I0422 18:18:27.693926   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 78/120
	I0422 18:18:28.695477   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 79/120
	I0422 18:18:29.697793   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 80/120
	I0422 18:18:30.699284   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 81/120
	I0422 18:18:31.700859   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 82/120
	I0422 18:18:32.702356   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 83/120
	I0422 18:18:33.703946   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 84/120
	I0422 18:18:34.706033   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 85/120
	I0422 18:18:35.707694   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 86/120
	I0422 18:18:36.709092   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 87/120
	I0422 18:18:37.710434   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 88/120
	I0422 18:18:38.711821   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 89/120
	I0422 18:18:39.714548   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 90/120
	I0422 18:18:40.716626   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 91/120
	I0422 18:18:41.718105   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 92/120
	I0422 18:18:42.719729   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 93/120
	I0422 18:18:43.721213   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 94/120
	I0422 18:18:44.723648   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 95/120
	I0422 18:18:45.725098   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 96/120
	I0422 18:18:46.726848   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 97/120
	I0422 18:18:47.728608   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 98/120
	I0422 18:18:48.730163   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 99/120
	I0422 18:18:49.731499   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 100/120
	I0422 18:18:50.733094   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 101/120
	I0422 18:18:51.734475   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 102/120
	I0422 18:18:52.735883   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 103/120
	I0422 18:18:53.737480   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 104/120
	I0422 18:18:54.739808   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 105/120
	I0422 18:18:55.741351   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 106/120
	I0422 18:18:56.742860   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 107/120
	I0422 18:18:57.744318   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 108/120
	I0422 18:18:58.746178   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 109/120
	I0422 18:18:59.747636   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 110/120
	I0422 18:19:00.749000   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 111/120
	I0422 18:19:01.750396   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 112/120
	I0422 18:19:02.751758   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 113/120
	I0422 18:19:03.753339   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 114/120
	I0422 18:19:04.755319   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 115/120
	I0422 18:19:05.756921   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 116/120
	I0422 18:19:06.758421   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 117/120
	I0422 18:19:07.759975   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 118/120
	I0422 18:19:08.761490   75999 main.go:141] libmachine: (no-preload-407991) Waiting for machine to stop 119/120
	I0422 18:19:09.762897   75999 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 18:19:09.762944   75999 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0422 18:19:09.765116   75999 out.go:177] 
	W0422 18:19:09.766542   75999 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0422 18:19:09.766582   75999 out.go:239] * 
	* 
	W0422 18:19:09.769264   75999 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:19:09.770705   75999 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-407991 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991: exit status 3 (18.475243025s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 18:19:28.247437   77107 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.164:22: connect: no route to host
	E0422 18:19:28.247461   77107 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.164:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-407991" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.05s)
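Here the stop loop polls the machine for the full 120 attempts while libvirt keeps reporting the domain as "Running", so minikube exits with GUEST_STOP_TIMEOUT. A minimal follow-up sketch using virsh, assuming the kvm2 driver created a libvirt domain named after the profile and that the suite's qemu:///system connection URI is in use:

    # See what libvirt itself thinks the domain state is
    virsh -c qemu:///system domstate no-preload-407991

    # If the graceful shutdown is stuck, force the domain off so later tests start from a clean slate
    virsh -c qemu:///system destroy no-preload-407991

    # Collect minikube's own logs for the GitHub issue, as the message box above suggests
    out/minikube-linux-amd64 logs -p no-preload-407991 --file=logs.txt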

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-782377 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-782377 --alsologtostderr -v=3: exit status 82 (2m0.54631047s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-782377"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 18:17:16.853253   76084 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:17:16.853378   76084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:17:16.853389   76084 out.go:304] Setting ErrFile to fd 2...
	I0422 18:17:16.853395   76084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:17:16.853692   76084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:17:16.854013   76084 out.go:298] Setting JSON to false
	I0422 18:17:16.854152   76084 mustload.go:65] Loading cluster: embed-certs-782377
	I0422 18:17:16.854561   76084 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:17:16.854642   76084 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/config.json ...
	I0422 18:17:16.855070   76084 mustload.go:65] Loading cluster: embed-certs-782377
	I0422 18:17:16.855221   76084 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:17:16.855258   76084 stop.go:39] StopHost: embed-certs-782377
	I0422 18:17:16.855704   76084 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:17:16.855752   76084 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:17:16.871715   76084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43327
	I0422 18:17:16.872291   76084 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:17:16.872964   76084 main.go:141] libmachine: Using API Version  1
	I0422 18:17:16.872994   76084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:17:16.873350   76084 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:17:16.875554   76084 out.go:177] * Stopping node "embed-certs-782377"  ...
	I0422 18:17:16.876895   76084 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 18:17:16.876943   76084 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:17:16.877170   76084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 18:17:16.877196   76084 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:17:16.880245   76084 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:17:16.880714   76084 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:16:18 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:17:16.880755   76084 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:17:16.880860   76084 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:17:16.881067   76084 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:17:16.881235   76084 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:17:16.881379   76084 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:17:16.998260   76084 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 18:17:17.052334   76084 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 18:17:17.120679   76084 main.go:141] libmachine: Stopping "embed-certs-782377"...
	I0422 18:17:17.120715   76084 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:17:17.122589   76084 main.go:141] libmachine: (embed-certs-782377) Calling .Stop
	I0422 18:17:17.126694   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 0/120
	I0422 18:17:18.128464   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 1/120
	I0422 18:17:19.129976   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 2/120
	I0422 18:17:20.131360   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 3/120
	I0422 18:17:21.132512   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 4/120
	I0422 18:17:22.134659   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 5/120
	I0422 18:17:23.136226   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 6/120
	I0422 18:17:24.137503   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 7/120
	I0422 18:17:25.139297   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 8/120
	I0422 18:17:26.140717   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 9/120
	I0422 18:17:27.142928   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 10/120
	I0422 18:17:28.144512   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 11/120
	I0422 18:17:29.146114   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 12/120
	I0422 18:17:30.147591   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 13/120
	I0422 18:17:31.149676   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 14/120
	I0422 18:17:32.152022   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 15/120
	I0422 18:17:33.153603   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 16/120
	I0422 18:17:34.155559   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 17/120
	I0422 18:17:35.157052   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 18/120
	I0422 18:17:36.158685   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 19/120
	I0422 18:17:37.161481   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 20/120
	I0422 18:17:38.163021   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 21/120
	I0422 18:17:39.164501   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 22/120
	I0422 18:17:40.166028   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 23/120
	I0422 18:17:41.167840   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 24/120
	I0422 18:17:42.169867   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 25/120
	I0422 18:17:43.171101   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 26/120
	I0422 18:17:44.172893   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 27/120
	I0422 18:17:45.174573   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 28/120
	I0422 18:17:46.176776   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 29/120
	I0422 18:17:47.178200   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 30/120
	I0422 18:17:48.179867   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 31/120
	I0422 18:17:49.181857   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 32/120
	I0422 18:17:50.183482   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 33/120
	I0422 18:17:51.185095   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 34/120
	I0422 18:17:52.186481   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 35/120
	I0422 18:17:53.188109   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 36/120
	I0422 18:17:54.189887   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 37/120
	I0422 18:17:55.191345   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 38/120
	I0422 18:17:56.192907   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 39/120
	I0422 18:17:57.195438   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 40/120
	I0422 18:17:58.197308   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 41/120
	I0422 18:17:59.198877   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 42/120
	I0422 18:18:00.200484   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 43/120
	I0422 18:18:01.201988   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 44/120
	I0422 18:18:02.204595   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 45/120
	I0422 18:18:03.206329   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 46/120
	I0422 18:18:04.208008   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 47/120
	I0422 18:18:05.209358   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 48/120
	I0422 18:18:06.210904   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 49/120
	I0422 18:18:07.212412   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 50/120
	I0422 18:18:08.213869   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 51/120
	I0422 18:18:09.215187   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 52/120
	I0422 18:18:10.216721   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 53/120
	I0422 18:18:11.218142   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 54/120
	I0422 18:18:12.220412   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 55/120
	I0422 18:18:13.221999   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 56/120
	I0422 18:18:14.223555   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 57/120
	I0422 18:18:15.224982   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 58/120
	I0422 18:18:16.226319   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 59/120
	I0422 18:18:17.228701   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 60/120
	I0422 18:18:18.230165   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 61/120
	I0422 18:18:19.231551   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 62/120
	I0422 18:18:20.232966   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 63/120
	I0422 18:18:21.234229   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 64/120
	I0422 18:18:22.236283   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 65/120
	I0422 18:18:23.237824   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 66/120
	I0422 18:18:24.239155   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 67/120
	I0422 18:18:25.240921   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 68/120
	I0422 18:18:26.242229   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 69/120
	I0422 18:18:27.243827   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 70/120
	I0422 18:18:28.245423   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 71/120
	I0422 18:18:29.247107   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 72/120
	I0422 18:18:30.248450   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 73/120
	I0422 18:18:31.249825   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 74/120
	I0422 18:18:32.252030   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 75/120
	I0422 18:18:33.253549   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 76/120
	I0422 18:18:34.255203   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 77/120
	I0422 18:18:35.256708   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 78/120
	I0422 18:18:36.258244   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 79/120
	I0422 18:18:37.259819   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 80/120
	I0422 18:18:38.261553   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 81/120
	I0422 18:18:39.263007   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 82/120
	I0422 18:18:40.264674   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 83/120
	I0422 18:18:41.266086   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 84/120
	I0422 18:18:42.268369   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 85/120
	I0422 18:18:43.269977   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 86/120
	I0422 18:18:44.271586   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 87/120
	I0422 18:18:45.273137   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 88/120
	I0422 18:18:46.274519   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 89/120
	I0422 18:18:47.276728   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 90/120
	I0422 18:18:48.278203   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 91/120
	I0422 18:18:49.279696   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 92/120
	I0422 18:18:50.281159   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 93/120
	I0422 18:18:51.282610   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 94/120
	I0422 18:18:52.284904   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 95/120
	I0422 18:18:53.286278   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 96/120
	I0422 18:18:54.287980   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 97/120
	I0422 18:18:55.289707   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 98/120
	I0422 18:18:56.291256   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 99/120
	I0422 18:18:57.293619   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 100/120
	I0422 18:18:58.295045   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 101/120
	I0422 18:18:59.296582   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 102/120
	I0422 18:19:00.297918   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 103/120
	I0422 18:19:01.299589   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 104/120
	I0422 18:19:02.301805   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 105/120
	I0422 18:19:03.303189   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 106/120
	I0422 18:19:04.304613   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 107/120
	I0422 18:19:05.306007   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 108/120
	I0422 18:19:06.307344   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 109/120
	I0422 18:19:07.308677   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 110/120
	I0422 18:19:08.310234   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 111/120
	I0422 18:19:09.311677   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 112/120
	I0422 18:19:10.313219   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 113/120
	I0422 18:19:11.315185   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 114/120
	I0422 18:19:12.317288   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 115/120
	I0422 18:19:13.318878   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 116/120
	I0422 18:19:14.320430   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 117/120
	I0422 18:19:15.321856   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 118/120
	I0422 18:19:16.323338   76084 main.go:141] libmachine: (embed-certs-782377) Waiting for machine to stop 119/120
	I0422 18:19:17.324760   76084 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 18:19:17.324803   76084 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0422 18:19:17.326892   76084 out.go:177] 
	W0422 18:19:17.328462   76084 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0422 18:19:17.328480   76084 out.go:239] * 
	* 
	W0422 18:19:17.330937   76084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:19:17.332484   76084 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-782377 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377: exit status 3 (18.593474869s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0422 18:19:35.927572   77153 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0422 18:19:35.927598   77153 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-782377" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.14s)
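This is the same GUEST_STOP_TIMEOUT pattern as the no-preload failure, but the post-mortem then reports "no route to host" on port 22, so the "Error" state may only mean SSH became unreachable mid-shutdown. A quick reachability check before trusting that state, assuming a libvirt domain named after the profile, the guest IP from the DHCP lease shown above, and netcat available on the host:

    # Ask libvirt for the authoritative domain state
    virsh -c qemu:///system domstate embed-certs-782377

    # Probe the guest's SSH port with a short timeout
    nc -vz -w 5 192.168.50.114 22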

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-856422 --alsologtostderr -v=3
E0422 18:17:55.736827   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:18:05.977044   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:18:09.193731   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:09.199061   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:09.209344   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:09.230528   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:09.270901   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:09.351375   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:09.511834   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:09.832070   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:10.473158   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:11.753836   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:14.314981   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:19.436187   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:20.338787   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:20.344075   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:20.354357   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:20.374759   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:20.415076   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:20.495449   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:20.655901   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:20.976541   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:21.617544   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:22.897929   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:25.458355   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:26.457723   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:18:29.677416   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:18:30.579044   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:40.819251   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:18:50.158397   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:19:01.299697   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:19:07.417919   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-856422 --alsologtostderr -v=3: exit status 82 (2m0.523709596s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-856422"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 18:17:53.005856   76817 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:17:53.006008   76817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:17:53.006022   76817 out.go:304] Setting ErrFile to fd 2...
	I0422 18:17:53.006030   76817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:17:53.006232   76817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:17:53.006463   76817 out.go:298] Setting JSON to false
	I0422 18:17:53.006541   76817 mustload.go:65] Loading cluster: default-k8s-diff-port-856422
	I0422 18:17:53.006873   76817 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:17:53.006937   76817 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/config.json ...
	I0422 18:17:53.007097   76817 mustload.go:65] Loading cluster: default-k8s-diff-port-856422
	I0422 18:17:53.007217   76817 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:17:53.007251   76817 stop.go:39] StopHost: default-k8s-diff-port-856422
	I0422 18:17:53.007644   76817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:17:53.007681   76817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:17:53.022229   76817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35563
	I0422 18:17:53.022692   76817 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:17:53.023233   76817 main.go:141] libmachine: Using API Version  1
	I0422 18:17:53.023260   76817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:17:53.023631   76817 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:17:53.026347   76817 out.go:177] * Stopping node "default-k8s-diff-port-856422"  ...
	I0422 18:17:53.028159   76817 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0422 18:17:53.028189   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:17:53.028429   76817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0422 18:17:53.028457   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:17:53.031183   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:17:53.031623   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:17:53.031649   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:17:53.031758   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:17:53.031950   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:17:53.032114   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:17:53.032247   76817 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:17:53.127233   76817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0422 18:17:53.193070   76817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0422 18:17:53.267068   76817 main.go:141] libmachine: Stopping "default-k8s-diff-port-856422"...
	I0422 18:17:53.267140   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:17:53.268836   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Stop
	I0422 18:17:53.272394   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 0/120
	I0422 18:17:54.273751   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 1/120
	I0422 18:17:55.275460   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 2/120
	I0422 18:17:56.276943   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 3/120
	I0422 18:17:57.278528   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 4/120
	I0422 18:17:58.280613   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 5/120
	I0422 18:17:59.282018   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 6/120
	I0422 18:18:00.283446   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 7/120
	I0422 18:18:01.284853   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 8/120
	I0422 18:18:02.286306   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 9/120
	I0422 18:18:03.288148   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 10/120
	I0422 18:18:04.289883   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 11/120
	I0422 18:18:05.291354   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 12/120
	I0422 18:18:06.293228   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 13/120
	I0422 18:18:07.294624   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 14/120
	I0422 18:18:08.296696   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 15/120
	I0422 18:18:09.298034   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 16/120
	I0422 18:18:10.299507   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 17/120
	I0422 18:18:11.300977   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 18/120
	I0422 18:18:12.302490   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 19/120
	I0422 18:18:13.303971   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 20/120
	I0422 18:18:14.305664   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 21/120
	I0422 18:18:15.307319   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 22/120
	I0422 18:18:16.308774   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 23/120
	I0422 18:18:17.310218   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 24/120
	I0422 18:18:18.312252   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 25/120
	I0422 18:18:19.313737   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 26/120
	I0422 18:18:20.315432   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 27/120
	I0422 18:18:21.316974   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 28/120
	I0422 18:18:22.318469   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 29/120
	I0422 18:18:23.320008   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 30/120
	I0422 18:18:24.321321   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 31/120
	I0422 18:18:25.322924   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 32/120
	I0422 18:18:26.324325   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 33/120
	I0422 18:18:27.326100   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 34/120
	I0422 18:18:28.328355   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 35/120
	I0422 18:18:29.329799   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 36/120
	I0422 18:18:30.331279   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 37/120
	I0422 18:18:31.332910   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 38/120
	I0422 18:18:32.334347   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 39/120
	I0422 18:18:33.335957   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 40/120
	I0422 18:18:34.337396   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 41/120
	I0422 18:18:35.339018   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 42/120
	I0422 18:18:36.340863   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 43/120
	I0422 18:18:37.342400   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 44/120
	I0422 18:18:38.344521   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 45/120
	I0422 18:18:39.346013   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 46/120
	I0422 18:18:40.347524   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 47/120
	I0422 18:18:41.349144   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 48/120
	I0422 18:18:42.350871   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 49/120
	I0422 18:18:43.352318   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 50/120
	I0422 18:18:44.353653   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 51/120
	I0422 18:18:45.355244   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 52/120
	I0422 18:18:46.356689   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 53/120
	I0422 18:18:47.358061   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 54/120
	I0422 18:18:48.360117   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 55/120
	I0422 18:18:49.361607   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 56/120
	I0422 18:18:50.362923   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 57/120
	I0422 18:18:51.364310   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 58/120
	I0422 18:18:52.365935   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 59/120
	I0422 18:18:53.367404   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 60/120
	I0422 18:18:54.369015   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 61/120
	I0422 18:18:55.370526   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 62/120
	I0422 18:18:56.372097   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 63/120
	I0422 18:18:57.373494   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 64/120
	I0422 18:18:58.375694   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 65/120
	I0422 18:18:59.377115   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 66/120
	I0422 18:19:00.378532   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 67/120
	I0422 18:19:01.380052   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 68/120
	I0422 18:19:02.381648   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 69/120
	I0422 18:19:03.384165   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 70/120
	I0422 18:19:04.385339   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 71/120
	I0422 18:19:05.386827   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 72/120
	I0422 18:19:06.388088   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 73/120
	I0422 18:19:07.389674   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 74/120
	I0422 18:19:08.391877   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 75/120
	I0422 18:19:09.393191   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 76/120
	I0422 18:19:10.394646   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 77/120
	I0422 18:19:11.396231   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 78/120
	I0422 18:19:12.397653   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 79/120
	I0422 18:19:13.399063   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 80/120
	I0422 18:19:14.400513   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 81/120
	I0422 18:19:15.402289   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 82/120
	I0422 18:19:16.403821   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 83/120
	I0422 18:19:17.405970   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 84/120
	I0422 18:19:18.408315   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 85/120
	I0422 18:19:19.409731   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 86/120
	I0422 18:19:20.411750   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 87/120
	I0422 18:19:21.413171   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 88/120
	I0422 18:19:22.414710   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 89/120
	I0422 18:19:23.416285   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 90/120
	I0422 18:19:24.417904   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 91/120
	I0422 18:19:25.419386   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 92/120
	I0422 18:19:26.420548   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 93/120
	I0422 18:19:27.422122   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 94/120
	I0422 18:19:28.424403   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 95/120
	I0422 18:19:29.426002   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 96/120
	I0422 18:19:30.427907   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 97/120
	I0422 18:19:31.429476   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 98/120
	I0422 18:19:32.431159   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 99/120
	I0422 18:19:33.432694   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 100/120
	I0422 18:19:34.434249   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 101/120
	I0422 18:19:35.435924   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 102/120
	I0422 18:19:36.437663   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 103/120
	I0422 18:19:37.439418   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 104/120
	I0422 18:19:38.441822   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 105/120
	I0422 18:19:39.443311   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 106/120
	I0422 18:19:40.444932   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 107/120
	I0422 18:19:41.446278   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 108/120
	I0422 18:19:42.447707   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 109/120
	I0422 18:19:43.449866   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 110/120
	I0422 18:19:44.451293   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 111/120
	I0422 18:19:45.452733   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 112/120
	I0422 18:19:46.454410   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 113/120
	I0422 18:19:47.455933   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 114/120
	I0422 18:19:48.458019   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 115/120
	I0422 18:19:49.459533   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 116/120
	I0422 18:19:50.460809   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 117/120
	I0422 18:19:51.462300   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 118/120
	I0422 18:19:52.463700   76817 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for machine to stop 119/120
	I0422 18:19:53.464292   76817 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0422 18:19:53.464370   76817 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0422 18:19:53.466884   76817 out.go:177] 
	W0422 18:19:53.468707   76817 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0422 18:19:53.468732   76817 out.go:239] * 
	* 
	W0422 18:19:53.471495   76817 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:19:53.473004   76817 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-856422 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
E0422 18:19:55.168987   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:57.217341   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:57.222618   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:57.232904   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:57.253230   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:57.293618   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:57.373977   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:57.534494   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:57.648029   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:57.855530   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:58.496633   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:19:59.777340   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:20:00.289236   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:20:02.338126   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:20:07.458970   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:20:07.902112   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 18:20:10.529972   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422: exit status 3 (18.548724904s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 18:20:12.023469   77684 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.206:22: connect: no route to host
	E0422 18:20:12.023496   77684 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.206:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-856422" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991
E0422 18:19:31.119172   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991: exit status 3 (3.168192734s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 18:19:31.415588   77233 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.164:22: connect: no route to host
	E0422 18:19:31.415629   77233 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.164:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-407991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-407991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153345986s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.164:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-407991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991
E0422 18:19:37.803048   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:38.444141   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991: exit status 3 (3.062024637s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 18:19:40.631494   77335 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.164:22: connect: no route to host
	E0422 18:19:40.631513   77335 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.164:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-407991" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377
E0422 18:19:37.165392   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:37.170705   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:37.180994   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:37.201287   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:37.241797   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:37.322270   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:19:37.482694   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377: exit status 3 (3.167707718s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 18:19:39.095485   77304 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0422 18:19:39.095504   77304 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-782377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0422 18:19:39.724940   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-782377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153318351s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-782377 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377
E0422 18:19:47.406956   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377: exit status 3 (3.062659914s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 18:19:48.311551   77588 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host
	E0422 18:19:48.311576   77588 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.114:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-782377" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-367072 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-367072 create -f testdata/busybox.yaml: exit status 1 (43.360464ms)

** stderr ** 
	error: context "old-k8s-version-367072" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-367072 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 6 (232.548195ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0422 18:19:43.137367   77483 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-367072" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 6 (231.912265ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0422 18:19:43.368366   77512 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-367072" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (118.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-367072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-367072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m58.425541042s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-367072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-367072 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-367072 describe deploy/metrics-server -n kube-system: exit status 1 (43.565656ms)

** stderr ** 
	error: context "old-k8s-version-367072" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-367072 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 6 (234.378078ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0422 18:21:42.072145   78242 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-367072" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (118.70s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422: exit status 3 (3.199987146s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 18:20:15.223513   77782 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.206:22: connect: no route to host
	E0422 18:20:15.223532   77782 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.206:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-856422 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0422 18:20:17.699568   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:20:18.128609   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-856422 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153138538s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.206:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-856422 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422: exit status 3 (3.063378215s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0422 18:20:24.439568   77881 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.206:22: connect: no route to host
	E0422 18:20:24.439589   77881 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.206:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-856422" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

TestStartStop/group/old-k8s-version/serial/SecondStart (705.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-367072 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0422 18:22:21.010596   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:22:25.464845   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:22:33.891481   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:22:41.062473   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:22:45.496708   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:23:09.193649   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:23:13.178859   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:23:20.338823   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:23:36.881016   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:23:47.385124   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:23:48.022319   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:24:37.165538   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:24:50.048271   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:24:57.217152   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:25:04.850754   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:25:07.901973   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 18:25:17.732032   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:25:24.902845   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:26:03.541277   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:26:19.002469   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 18:26:31.226027   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:27:42.053264   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 18:27:45.496613   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:28:09.194453   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:28:20.339050   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:29:37.165466   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:29:50.048257   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:29:57.216876   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-367072 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m41.858573878s)

-- stdout --
	* [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0422 18:21:44.651239   78377 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:21:44.651502   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651512   78377 out.go:304] Setting ErrFile to fd 2...
	I0422 18:21:44.651517   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651743   78377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:21:44.652361   78377 out.go:298] Setting JSON to false
	I0422 18:21:44.653361   78377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7450,"bootTime":1713802655,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:21:44.653418   78377 start.go:139] virtualization: kvm guest
	I0422 18:21:44.655663   78377 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:21:44.657140   78377 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:21:44.658441   78377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:21:44.657169   78377 notify.go:220] Checking for updates...
	I0422 18:21:44.661128   78377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:21:44.662518   78377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:21:44.663775   78377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:21:44.665418   78377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:21:44.667565   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:21:44.667940   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.667974   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.682806   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0422 18:21:44.683248   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.683772   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.683796   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.684162   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.684386   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.686458   78377 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:21:44.688047   78377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:21:44.688430   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.688471   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.703069   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0422 18:21:44.703543   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.704022   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.704045   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.704344   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.704551   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.740500   78377 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:21:44.741959   78377 start.go:297] selected driver: kvm2
	I0422 18:21:44.741977   78377 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.742115   78377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:21:44.742852   78377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.742936   78377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:21:44.757771   78377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:21:44.758147   78377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:21:44.758223   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:21:44.758237   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:21:44.758283   78377 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.758417   78377 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.760296   78377 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:21:44.761538   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:21:44.761589   78377 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:21:44.761603   78377 cache.go:56] Caching tarball of preloaded images
	I0422 18:21:44.761682   78377 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:21:44.761696   78377 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:21:44.761815   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:21:44.762033   78377 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:24:55.812288   78377 start.go:364] duration metric: took 3m11.050220887s to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:24:55.812348   78377 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:55.812359   78377 fix.go:54] fixHost starting: 
	I0422 18:24:55.812769   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:55.812806   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:55.830114   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0422 18:24:55.830528   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:55.831130   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:24:55.831155   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:55.831459   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:55.831688   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:24:55.831855   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:24:55.833322   78377 fix.go:112] recreateIfNeeded on old-k8s-version-367072: state=Stopped err=<nil>
	I0422 18:24:55.833351   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	W0422 18:24:55.833481   78377 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:55.835517   78377 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	I0422 18:24:55.836947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .Start
	I0422 18:24:55.837156   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:24:55.837991   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:24:55.838340   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:24:55.838802   78377 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:24:55.839484   78377 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:24:57.114447   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:24:57.115418   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.115808   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.115885   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.115780   79197 retry.go:31] will retry after 292.692957ms: waiting for machine to come up
	I0422 18:24:57.410220   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.410760   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.410793   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.410707   79197 retry.go:31] will retry after 381.746596ms: waiting for machine to come up
	I0422 18:24:57.794121   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.794537   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.794561   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.794500   79197 retry.go:31] will retry after 343.501318ms: waiting for machine to come up
	I0422 18:24:58.140203   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.140843   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.140872   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.140795   79197 retry.go:31] will retry after 497.222481ms: waiting for machine to come up
	I0422 18:24:58.639611   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.640103   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.640133   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.640061   79197 retry.go:31] will retry after 578.746837ms: waiting for machine to come up
	I0422 18:24:59.220771   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.221312   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.221342   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.221264   79197 retry.go:31] will retry after 773.821721ms: waiting for machine to come up
	I0422 18:24:59.996437   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.996984   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.997018   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.996926   79197 retry.go:31] will retry after 1.191182438s: waiting for machine to come up
	I0422 18:25:01.190382   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:01.190954   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:01.190990   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:01.190917   79197 retry.go:31] will retry after 1.312288818s: waiting for machine to come up
	I0422 18:25:02.504320   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:02.504783   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:02.504807   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:02.504744   79197 retry.go:31] will retry after 1.553447941s: waiting for machine to come up
	I0422 18:25:04.060300   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:04.060822   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:04.060855   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:04.060778   79197 retry.go:31] will retry after 1.790234912s: waiting for machine to come up
	I0422 18:25:05.852898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:05.853386   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:05.853413   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:05.853350   79197 retry.go:31] will retry after 2.265221688s: waiting for machine to come up
	I0422 18:25:08.121376   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:08.121797   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:08.121835   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:08.121747   79197 retry.go:31] will retry after 3.098868652s: waiting for machine to come up
	I0422 18:25:11.221887   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:11.222319   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:11.222358   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:11.222277   79197 retry.go:31] will retry after 4.068460973s: waiting for machine to come up
	I0422 18:25:15.295463   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296039   78377 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:25:15.296072   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296081   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:25:15.296472   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.296493   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
	I0422 18:25:15.296508   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | skip adding static IP to network mk-old-k8s-version-367072 - found existing host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"}
	I0422 18:25:15.296524   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:25:15.296537   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:25:15.299164   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299527   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.299562   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299661   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:25:15.299692   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:25:15.299731   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:15.299745   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:25:15.299762   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:25:15.431323   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:15.431669   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:25:15.432328   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.434829   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435261   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.435293   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435554   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:25:15.435765   78377 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:15.435786   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:15.436017   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.438390   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438750   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.438784   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438910   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.439095   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439314   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.439666   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.439849   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.439861   78377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:15.555657   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:15.555686   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.555931   78377 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:25:15.555962   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.556169   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.558789   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559254   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.559292   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559331   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.559492   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559641   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559748   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.559877   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.560055   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.560077   78377 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:25:15.690454   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:25:15.690486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.693309   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693654   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.693690   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693952   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.694172   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694390   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694546   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.694732   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.694940   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.694960   78377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:15.821039   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:15.821068   78377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:15.821096   78377 buildroot.go:174] setting up certificates
	I0422 18:25:15.821105   78377 provision.go:84] configureAuth start
	I0422 18:25:15.821113   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.821339   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.824209   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824673   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.824710   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824884   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.827439   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827725   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.827752   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827907   78377 provision.go:143] copyHostCerts
	I0422 18:25:15.827974   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:15.827987   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:15.828059   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:15.828170   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:15.828181   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:15.828209   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:15.828281   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:15.828291   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:15.828317   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:15.828411   78377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
	I0422 18:25:15.967003   78377 provision.go:177] copyRemoteCerts
	I0422 18:25:15.967056   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:15.967082   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.969759   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970152   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.970189   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970419   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.970600   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.970750   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.970903   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.058600   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:16.088368   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:25:16.119116   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:16.145380   78377 provision.go:87] duration metric: took 324.262342ms to configureAuth
	I0422 18:25:16.145416   78377 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:16.145651   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:25:16.145736   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.148776   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149221   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.149251   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149449   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.149624   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149789   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.150116   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.150295   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.150313   78377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:16.448112   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:16.448141   78377 machine.go:97] duration metric: took 1.012360153s to provisionDockerMachine
	I0422 18:25:16.448154   78377 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:25:16.448166   78377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:16.448188   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.448508   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:16.448541   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.451479   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.451874   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.451898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.452170   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.452373   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.452576   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.452773   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.543300   78377 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:16.549385   78377 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:16.549409   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:16.549473   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:16.549590   78377 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:16.549727   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:16.560863   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:16.585861   78377 start.go:296] duration metric: took 137.693932ms for postStartSetup
	I0422 18:25:16.585911   78377 fix.go:56] duration metric: took 20.77354305s for fixHost
	I0422 18:25:16.585931   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.588815   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589234   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.589263   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589495   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.589713   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.589877   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.590039   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.590245   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.590396   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.590406   78377 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:25:16.704537   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810316.682617297
	
	I0422 18:25:16.704559   78377 fix.go:216] guest clock: 1713810316.682617297
	I0422 18:25:16.704569   78377 fix.go:229] Guest: 2024-04-22 18:25:16.682617297 +0000 UTC Remote: 2024-04-22 18:25:16.585915688 +0000 UTC m=+211.981005523 (delta=96.701609ms)
	I0422 18:25:16.704592   78377 fix.go:200] guest clock delta is within tolerance: 96.701609ms
	I0422 18:25:16.704600   78377 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 20.892277591s
	I0422 18:25:16.704631   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.704920   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:16.707837   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708205   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.708230   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708427   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.708994   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709163   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709240   78377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:16.709279   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.709342   78377 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:16.709364   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.712025   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712216   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712450   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712498   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712566   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.712674   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712720   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712722   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.712857   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.712945   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.713038   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.713101   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.713240   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.713370   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.804499   78377 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:16.836596   78377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:16.993049   78377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:17.000275   78377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:17.000346   78377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:17.023327   78377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:17.023351   78377 start.go:494] detecting cgroup driver to use...
	I0422 18:25:17.023425   78377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:17.045320   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:17.061622   78377 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:17.061692   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:17.078768   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:17.094562   78377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:17.221702   78377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:17.390374   78377 docker.go:233] disabling docker service ...
	I0422 18:25:17.390449   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:17.409352   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:17.425491   78377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:17.582359   78377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:17.735691   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:17.752812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:17.777437   78377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:25:17.777495   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.789378   78377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:17.789441   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.801159   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.813702   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.825938   78377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:17.841552   78377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:17.852365   78377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:17.852455   78377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:17.870233   78377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:17.882139   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:18.021505   78377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:18.179583   78377 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:18.179677   78377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:18.185047   78377 start.go:562] Will wait 60s for crictl version
	I0422 18:25:18.185105   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:18.189079   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:18.227533   78377 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:18.227643   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.260147   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.297011   78377 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:25:18.298407   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:18.301613   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302026   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:18.302057   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302317   78377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:18.307249   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:18.321575   78377 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:18.321721   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:25:18.321767   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:18.382066   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:18.382133   78377 ssh_runner.go:195] Run: which lz4
	I0422 18:25:18.387080   78377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:25:18.392576   78377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:25:18.392613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:25:20.319994   78377 crio.go:462] duration metric: took 1.932984536s to copy over tarball
	I0422 18:25:20.320076   78377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:25:23.622384   78377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.30227916s)
	I0422 18:25:23.622411   78377 crio.go:469] duration metric: took 3.302385661s to extract the tarball
	I0422 18:25:23.622419   78377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:23.678794   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:23.720105   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:23.720138   78377 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:23.720191   78377 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.720221   78377 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.720264   78377 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.720285   78377 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:25:23.720310   78377 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.720396   78377 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.720464   78377 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.720244   78377 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721865   78377 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.721895   78377 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.721911   78377 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721925   78377 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.721986   78377 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.722013   78377 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.722040   78377 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.722415   78377 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:25:23.947080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:25:23.956532   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.969401   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.975080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.977902   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.987657   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.091349   78377 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:25:24.091415   78377 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:25:24.091473   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091508   78377 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:25:24.091564   78377 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.091612   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091773   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.112708   78377 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:25:24.112758   78377 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.112807   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.156371   78377 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:25:24.156420   78377 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.156476   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209420   78377 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:25:24.209468   78377 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.209467   78377 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:25:24.209504   78377 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.209519   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209533   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209580   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:25:24.209613   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.209666   78377 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:25:24.209697   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.209700   78377 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.209721   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.209750   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.319159   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:25:24.319265   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:25:24.319294   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:25:24.319374   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:25:24.319453   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.319532   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.319575   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.406665   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:25:24.406699   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:25:24.406776   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:25:24.581672   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:24.724445   78377 cache_images.go:92] duration metric: took 1.004285991s to LoadCachedImages
	W0422 18:25:24.894312   78377 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0422 18:25:24.894361   78377 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:25:24.894488   78377 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:24.894582   78377 ssh_runner.go:195] Run: crio config
	I0422 18:25:24.951231   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:25:24.951266   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:24.951282   78377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:24.951305   78377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:25:24.951495   78377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:24.951570   78377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:25:24.964466   78377 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:24.964547   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:24.976092   78377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:25:24.995716   78377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:25.014159   78377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0422 18:25:25.036255   78377 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:25.040649   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:25.055323   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:25.186492   78377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:25.208819   78377 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:25:25.208862   78377 certs.go:194] generating shared ca certs ...
	I0422 18:25:25.208882   78377 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.209089   78377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:25.209144   78377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:25.209155   78377 certs.go:256] generating profile certs ...
	I0422 18:25:25.209307   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:25:25.209376   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:25:25.209438   78377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:25:25.209584   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:25.209623   78377 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:25.209632   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:25.209664   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:25.209701   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:25.209738   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:25.209791   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:25.210613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:25.262071   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:25.298556   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:25.331614   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:25.368285   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:25:25.403290   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:25.441081   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:25.487498   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:25:25.522482   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:25.549945   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:25.578991   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:25.608935   78377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:25.629179   78377 ssh_runner.go:195] Run: openssl version
	I0422 18:25:25.636149   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:25.648693   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653465   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653534   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.659701   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:25.671984   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:25.684361   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689344   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689410   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.695648   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:25.708266   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:25.721991   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726808   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726872   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.732974   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:25.749380   78377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:25.754517   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:25.761538   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:25.768472   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:25.775728   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:25.782337   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:25.788885   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:25:25.795677   78377 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:25.795771   78377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:25.795839   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.837381   78377 cri.go:89] found id: ""
	I0422 18:25:25.837437   78377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:25.848554   78377 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:25.848574   78377 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:25.848579   78377 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:25.848625   78377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:25.860204   78377 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:25.861212   78377 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:25:25.861884   78377 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-367072" cluster setting kubeconfig missing "old-k8s-version-367072" context setting]
	I0422 18:25:25.862851   78377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.864562   78377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:25.875151   78377 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.149
	I0422 18:25:25.875182   78377 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:25.875193   78377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:25.875255   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.915872   78377 cri.go:89] found id: ""
	I0422 18:25:25.915982   78377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:25.934776   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:25.946299   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:25.946326   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:25.946378   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:25:25.957495   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:25.957578   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:25.968843   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:25:25.981829   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:25.981909   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:25.995318   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.009567   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:26.009630   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.024306   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:25:26.036008   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:26.036075   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:26.046594   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:26.057056   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:26.207676   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.085460   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.324735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.431848   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.541157   78377 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:27.541254   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.042131   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.542270   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.041887   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.542069   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.041985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.541653   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.041304   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.542040   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.042024   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.541622   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.041428   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.541675   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.041841   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.541705   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.041898   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.541499   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.041443   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.542150   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.042296   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.542002   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.041367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.541518   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.041471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.542025   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.041777   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.541411   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.041834   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.542328   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.042211   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.542008   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.041844   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.542121   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.041564   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.541344   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.042273   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.541576   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.041447   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.541920   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.042364   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.541813   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.042362   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.541320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.041845   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.542204   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.042263   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.541538   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.042055   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.041479   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.542313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.041554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.541500   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.042153   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.541953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.041393   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.541470   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.042188   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.541734   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.042041   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.541540   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.041682   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.542178   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.042125   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.542154   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.042114   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.542138   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.042285   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.542226   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.041310   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.541432   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.041406   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.542306   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.042010   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.541508   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.041961   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.541723   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.041954   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.541963   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.041378   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.541879   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.041942   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.541357   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.041425   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.541474   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.041640   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.541360   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.042045   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.542018   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.541590   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.042320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.542036   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.041303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.541575   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.042300   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.542084   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.541867   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.041409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.542019   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.042027   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.042237   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.541613   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.042039   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.541667   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.041765   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.542383   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.042213   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.541317   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.042164   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.541367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.042303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.541416   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.042321   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.541554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.041583   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.542179   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.041877   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.541400   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:27.541473   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:27.585381   78377 cri.go:89] found id: ""
	I0422 18:26:27.585411   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.585424   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:27.585431   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:27.585503   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:27.622536   78377 cri.go:89] found id: ""
	I0422 18:26:27.622568   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.622578   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:27.622584   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:27.622645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:27.665233   78377 cri.go:89] found id: ""
	I0422 18:26:27.665264   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.665272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:27.665278   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:27.665356   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:27.703600   78377 cri.go:89] found id: ""
	I0422 18:26:27.703629   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.703640   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:27.703647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:27.703706   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:27.741412   78377 cri.go:89] found id: ""
	I0422 18:26:27.741441   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.741451   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:27.741459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:27.741520   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:27.783184   78377 cri.go:89] found id: ""
	I0422 18:26:27.783211   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.783218   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:27.783224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:27.783290   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:27.825404   78377 cri.go:89] found id: ""
	I0422 18:26:27.825433   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.825443   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:27.825450   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:27.825513   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:27.862052   78377 cri.go:89] found id: ""
	I0422 18:26:27.862076   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.862086   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:27.862096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:27.862109   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:27.914533   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:27.914564   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:27.929474   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:27.929502   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:28.054566   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:28.054595   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:28.054612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:28.119416   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:28.119451   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:30.667642   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:30.680870   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:30.680930   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:30.719832   78377 cri.go:89] found id: ""
	I0422 18:26:30.719863   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.719874   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:30.719881   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:30.719940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:30.756168   78377 cri.go:89] found id: ""
	I0422 18:26:30.756195   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.756206   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:30.756213   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:30.756267   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:30.792940   78377 cri.go:89] found id: ""
	I0422 18:26:30.792963   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.792971   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:30.792976   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:30.793021   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:30.827452   78377 cri.go:89] found id: ""
	I0422 18:26:30.827480   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.827490   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:30.827497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:30.827563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:30.868058   78377 cri.go:89] found id: ""
	I0422 18:26:30.868088   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.868099   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:30.868107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:30.868170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:30.908639   78377 cri.go:89] found id: ""
	I0422 18:26:30.908672   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.908680   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:30.908686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:30.908735   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:30.959048   78377 cri.go:89] found id: ""
	I0422 18:26:30.959073   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.959080   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:30.959085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:30.959153   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:30.998779   78377 cri.go:89] found id: ""
	I0422 18:26:30.998809   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.998821   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:30.998856   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:30.998875   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:31.053763   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:31.053804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:31.069522   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:31.069558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:31.147512   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:31.147541   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:31.147556   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:31.222713   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:31.222752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:33.765573   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:33.781038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:33.781116   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:33.822148   78377 cri.go:89] found id: ""
	I0422 18:26:33.822175   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.822182   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:33.822187   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:33.822282   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:33.862524   78377 cri.go:89] found id: ""
	I0422 18:26:33.862553   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.862559   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:33.862565   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:33.862626   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:33.905952   78377 cri.go:89] found id: ""
	I0422 18:26:33.905980   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.905991   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:33.905999   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:33.906059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:33.943184   78377 cri.go:89] found id: ""
	I0422 18:26:33.943212   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.943220   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:33.943227   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:33.943285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:33.981677   78377 cri.go:89] found id: ""
	I0422 18:26:33.981712   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.981723   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:33.981731   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:33.981790   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:34.025999   78377 cri.go:89] found id: ""
	I0422 18:26:34.026026   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.026035   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:34.026042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:34.026102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:34.062940   78377 cri.go:89] found id: ""
	I0422 18:26:34.062967   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.062977   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:34.062985   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:34.063044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:34.103112   78377 cri.go:89] found id: ""
	I0422 18:26:34.103153   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.103164   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:34.103175   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:34.103189   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:34.156907   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:34.156944   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:34.171581   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:34.171608   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:34.252755   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:34.252784   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:34.252799   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:34.334118   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:34.334155   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:36.882905   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:36.897949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:36.898026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:36.934776   78377 cri.go:89] found id: ""
	I0422 18:26:36.934801   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.934808   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:36.934814   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:36.934870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:36.974432   78377 cri.go:89] found id: ""
	I0422 18:26:36.974459   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.974467   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:36.974472   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:36.974519   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:37.011460   78377 cri.go:89] found id: ""
	I0422 18:26:37.011485   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.011496   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:37.011503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:37.011583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:37.056559   78377 cri.go:89] found id: ""
	I0422 18:26:37.056592   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.056604   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:37.056611   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:37.056670   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:37.095328   78377 cri.go:89] found id: ""
	I0422 18:26:37.095359   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.095371   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:37.095379   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:37.095460   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:37.132056   78377 cri.go:89] found id: ""
	I0422 18:26:37.132084   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.132095   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:37.132101   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:37.132162   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:37.168957   78377 cri.go:89] found id: ""
	I0422 18:26:37.168987   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.168998   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:37.169005   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:37.169072   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:37.207501   78377 cri.go:89] found id: ""
	I0422 18:26:37.207533   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.207544   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:37.207553   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:37.207567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:37.289851   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:37.289890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:37.351454   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:37.351481   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:37.409901   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:37.409938   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:37.425203   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:37.425234   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:37.508518   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
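Every "describe nodes" attempt in these cycles fails the same way: the kubeconfig points kubectl at localhost:8443, and nothing is listening there because the apiserver container never started. A hypothetical sketch of that connectivity check (the host, port, and resulting error come from the log above; the program itself is illustrative, not part of the test):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The endpoint kubectl is trying to reach, per the error above.
	addr := "localhost:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		// With no kube-apiserver running, this reports "connection refused",
		// which kubectl surfaces as the message seen in the log.
		fmt.Printf("cannot reach %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is reachable\n", addr)
}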
	I0422 18:26:40.008934   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:40.023037   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:40.023096   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:40.066750   78377 cri.go:89] found id: ""
	I0422 18:26:40.066791   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.066811   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:40.066818   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:40.066889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:40.106562   78377 cri.go:89] found id: ""
	I0422 18:26:40.106584   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.106592   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:40.106598   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:40.106644   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:40.145265   78377 cri.go:89] found id: ""
	I0422 18:26:40.145300   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.145311   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:40.145319   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:40.145385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:40.182667   78377 cri.go:89] found id: ""
	I0422 18:26:40.182696   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.182707   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:40.182714   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:40.182772   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:40.227084   78377 cri.go:89] found id: ""
	I0422 18:26:40.227114   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.227139   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:40.227148   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:40.227203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:40.264298   78377 cri.go:89] found id: ""
	I0422 18:26:40.264326   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.264333   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:40.264339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:40.264404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:40.302071   78377 cri.go:89] found id: ""
	I0422 18:26:40.302103   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.302113   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:40.302121   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:40.302191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:40.340031   78377 cri.go:89] found id: ""
	I0422 18:26:40.340072   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.340083   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:40.340094   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:40.340108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:40.386371   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:40.386402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:40.438805   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:40.438884   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:40.455199   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:40.455240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:40.535984   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:40.536006   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:40.536024   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.125605   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:43.139961   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:43.140033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:43.176588   78377 cri.go:89] found id: ""
	I0422 18:26:43.176615   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.176625   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:43.176632   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:43.176695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:43.215868   78377 cri.go:89] found id: ""
	I0422 18:26:43.215900   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.215921   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:43.215929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:43.215991   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:43.253562   78377 cri.go:89] found id: ""
	I0422 18:26:43.253592   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.253603   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:43.253608   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:43.253652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:43.289305   78377 cri.go:89] found id: ""
	I0422 18:26:43.289335   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.289346   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:43.289353   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:43.289417   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:43.329241   78377 cri.go:89] found id: ""
	I0422 18:26:43.329286   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.329295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:43.329300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:43.329351   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:43.369682   78377 cri.go:89] found id: ""
	I0422 18:26:43.369700   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.369707   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:43.369713   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:43.369764   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:43.411788   78377 cri.go:89] found id: ""
	I0422 18:26:43.411812   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.411821   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:43.411829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:43.411911   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:43.447351   78377 cri.go:89] found id: ""
	I0422 18:26:43.447387   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.447398   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:43.447407   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:43.447418   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:43.520087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:43.520114   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:43.520125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.602199   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:43.602233   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:43.645723   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:43.645748   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:43.702769   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:43.702804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.229598   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:46.243348   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:46.243418   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:46.282470   78377 cri.go:89] found id: ""
	I0422 18:26:46.282500   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.282512   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:46.282519   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:46.282584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:46.327718   78377 cri.go:89] found id: ""
	I0422 18:26:46.327747   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.327755   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:46.327761   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:46.327829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:46.369785   78377 cri.go:89] found id: ""
	I0422 18:26:46.369807   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.369814   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:46.369820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:46.369867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:46.408132   78377 cri.go:89] found id: ""
	I0422 18:26:46.408161   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.408170   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:46.408175   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:46.408236   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:46.450058   78377 cri.go:89] found id: ""
	I0422 18:26:46.450084   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.450091   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:46.450096   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:46.450144   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:46.493747   78377 cri.go:89] found id: ""
	I0422 18:26:46.493776   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.493788   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:46.493794   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:46.493847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:46.529054   78377 cri.go:89] found id: ""
	I0422 18:26:46.529090   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.529102   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:46.529122   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:46.529186   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:46.566699   78377 cri.go:89] found id: ""
	I0422 18:26:46.566724   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.566732   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:46.566740   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:46.566752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.582569   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:46.582606   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:46.652188   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:46.652212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:46.652224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:46.732276   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:46.732316   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:46.789834   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:46.789862   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.343229   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:49.357513   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:49.357571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:49.396741   78377 cri.go:89] found id: ""
	I0422 18:26:49.396774   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.396785   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:49.396792   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:49.396862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:49.432048   78377 cri.go:89] found id: ""
	I0422 18:26:49.432081   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.432093   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:49.432100   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:49.432159   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:49.482104   78377 cri.go:89] found id: ""
	I0422 18:26:49.482130   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.482138   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:49.482145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:49.482202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:49.526782   78377 cri.go:89] found id: ""
	I0422 18:26:49.526811   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.526823   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:49.526830   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:49.526884   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:49.575436   78377 cri.go:89] found id: ""
	I0422 18:26:49.575471   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.575482   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:49.575490   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:49.575553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:49.628839   78377 cri.go:89] found id: ""
	I0422 18:26:49.628862   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.628870   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:49.628875   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:49.628940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:49.670046   78377 cri.go:89] found id: ""
	I0422 18:26:49.670074   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.670085   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:49.670091   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:49.670158   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:49.707083   78377 cri.go:89] found id: ""
	I0422 18:26:49.707109   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.707119   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:49.707144   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:49.707157   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.762794   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:49.762838   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:49.777771   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:49.777801   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:49.853426   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:49.853448   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:49.853463   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:49.934621   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:49.934659   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:52.481352   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:52.495956   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:52.496025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:52.539518   78377 cri.go:89] found id: ""
	I0422 18:26:52.539549   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.539559   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:52.539566   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:52.539627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:52.580604   78377 cri.go:89] found id: ""
	I0422 18:26:52.580632   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.580641   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:52.580646   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:52.580700   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:52.622746   78377 cri.go:89] found id: ""
	I0422 18:26:52.622775   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.622783   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:52.622795   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:52.622858   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:52.659557   78377 cri.go:89] found id: ""
	I0422 18:26:52.659579   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.659587   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:52.659592   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:52.659661   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:52.697653   78377 cri.go:89] found id: ""
	I0422 18:26:52.697678   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.697685   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:52.697691   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:52.697745   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:52.735505   78377 cri.go:89] found id: ""
	I0422 18:26:52.735536   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.735546   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:52.735554   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:52.735616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:52.774216   78377 cri.go:89] found id: ""
	I0422 18:26:52.774239   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.774247   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:52.774261   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:52.774318   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:52.812909   78377 cri.go:89] found id: ""
	I0422 18:26:52.812934   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.812941   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:52.812949   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:52.812981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:52.897636   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:52.897663   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:52.897679   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:52.985013   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:52.985046   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:53.031395   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:53.031427   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:53.088446   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:53.088480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.603647   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:55.617977   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:55.618039   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:55.663769   78377 cri.go:89] found id: ""
	I0422 18:26:55.663797   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.663815   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:55.663822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:55.663925   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:55.701287   78377 cri.go:89] found id: ""
	I0422 18:26:55.701326   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.701338   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:55.701346   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:55.701435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:55.740041   78377 cri.go:89] found id: ""
	I0422 18:26:55.740067   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.740078   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:55.740107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:55.740163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:55.779093   78377 cri.go:89] found id: ""
	I0422 18:26:55.779143   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.779154   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:55.779170   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:55.779219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:55.822107   78377 cri.go:89] found id: ""
	I0422 18:26:55.822133   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.822141   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:55.822146   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:55.822195   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:55.862157   78377 cri.go:89] found id: ""
	I0422 18:26:55.862204   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.862215   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:55.862224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:55.862295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:55.902557   78377 cri.go:89] found id: ""
	I0422 18:26:55.902582   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.902595   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:55.902601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:55.902663   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:55.942185   78377 cri.go:89] found id: ""
	I0422 18:26:55.942215   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.942226   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:55.942237   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:55.942252   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.957050   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:55.957083   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:56.035015   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:56.035043   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:56.035058   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:56.125595   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:56.125636   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:56.169096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:56.169131   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:58.725079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:58.739736   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:58.739808   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:58.777724   78377 cri.go:89] found id: ""
	I0422 18:26:58.777752   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.777762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:58.777769   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:58.777828   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:58.814668   78377 cri.go:89] found id: ""
	I0422 18:26:58.814702   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.814713   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:58.814721   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:58.814791   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:58.852609   78377 cri.go:89] found id: ""
	I0422 18:26:58.852634   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.852648   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:58.852655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:58.852720   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:58.891881   78377 cri.go:89] found id: ""
	I0422 18:26:58.891904   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.891910   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:58.891936   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:58.891994   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:58.931663   78377 cri.go:89] found id: ""
	I0422 18:26:58.931690   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.931701   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:58.931708   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:58.931782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:58.967795   78377 cri.go:89] found id: ""
	I0422 18:26:58.967816   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.967823   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:58.967829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:58.967879   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:59.008898   78377 cri.go:89] found id: ""
	I0422 18:26:59.008932   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.008943   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:59.008950   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:59.009007   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:59.049230   78377 cri.go:89] found id: ""
	I0422 18:26:59.049267   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.049278   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:59.049288   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:59.049304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:59.104461   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:59.104508   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:59.119555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:59.119584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:59.195905   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:59.195952   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:59.195969   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:59.276319   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:59.276360   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:01.818221   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:01.833234   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:01.833294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:01.870997   78377 cri.go:89] found id: ""
	I0422 18:27:01.871022   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.871030   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:01.871036   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:01.871102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:01.910414   78377 cri.go:89] found id: ""
	I0422 18:27:01.910443   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.910453   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:01.910461   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:01.910526   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:01.949499   78377 cri.go:89] found id: ""
	I0422 18:27:01.949524   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.949532   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:01.949537   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:01.949598   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:01.987702   78377 cri.go:89] found id: ""
	I0422 18:27:01.987736   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.987747   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:01.987763   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:01.987836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:02.027193   78377 cri.go:89] found id: ""
	I0422 18:27:02.027222   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.027233   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:02.027240   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:02.027332   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:02.067537   78377 cri.go:89] found id: ""
	I0422 18:27:02.067564   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.067578   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:02.067584   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:02.067631   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:02.111085   78377 cri.go:89] found id: ""
	I0422 18:27:02.111112   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.111119   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:02.111140   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:02.111194   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:02.150730   78377 cri.go:89] found id: ""
	I0422 18:27:02.150760   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.150769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:02.150777   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:02.150789   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:02.230124   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:02.230150   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:02.230164   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:02.315337   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:02.315384   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:02.362022   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:02.362048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:02.421884   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:02.421924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:04.937145   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:04.952303   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:04.952412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:04.995024   78377 cri.go:89] found id: ""
	I0422 18:27:04.995059   78377 logs.go:276] 0 containers: []
	W0422 18:27:04.995071   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:04.995079   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:04.995151   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:05.035094   78377 cri.go:89] found id: ""
	I0422 18:27:05.035129   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.035141   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:05.035148   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:05.035204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:05.074178   78377 cri.go:89] found id: ""
	I0422 18:27:05.074204   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.074215   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:05.074222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:05.074294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:05.115285   78377 cri.go:89] found id: ""
	I0422 18:27:05.115313   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.115324   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:05.115331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:05.115398   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:05.151000   78377 cri.go:89] found id: ""
	I0422 18:27:05.151032   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.151041   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:05.151047   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:05.151189   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:05.191627   78377 cri.go:89] found id: ""
	I0422 18:27:05.191651   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.191659   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:05.191664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:05.191710   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:05.232141   78377 cri.go:89] found id: ""
	I0422 18:27:05.232173   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.232183   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:05.232191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:05.232252   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:05.268498   78377 cri.go:89] found id: ""
	I0422 18:27:05.268523   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.268530   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:05.268537   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:05.268554   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:05.315909   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:05.315937   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:05.369623   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:05.369664   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:05.387343   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:05.387381   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:05.466087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:05.466106   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:05.466117   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:08.053578   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:08.067569   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:08.067627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:08.108274   78377 cri.go:89] found id: ""
	I0422 18:27:08.108307   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.108318   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:08.108325   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:08.108384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:08.155343   78377 cri.go:89] found id: ""
	I0422 18:27:08.155366   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.155373   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:08.155379   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:08.155435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:08.194636   78377 cri.go:89] found id: ""
	I0422 18:27:08.194661   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.194672   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:08.194677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:08.194724   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:08.232992   78377 cri.go:89] found id: ""
	I0422 18:27:08.233017   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.233024   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:08.233029   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:08.233076   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:08.271349   78377 cri.go:89] found id: ""
	I0422 18:27:08.271381   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.271391   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:08.271407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:08.271459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:08.311991   78377 cri.go:89] found id: ""
	I0422 18:27:08.312021   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.312033   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:08.312042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:08.312097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:08.353301   78377 cri.go:89] found id: ""
	I0422 18:27:08.353326   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.353333   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:08.353340   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:08.353399   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:08.391989   78377 cri.go:89] found id: ""
	I0422 18:27:08.392015   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.392025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:08.392035   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:08.392048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:08.437228   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:08.437260   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:08.489086   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:08.489121   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:08.503588   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:08.503616   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:08.583824   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:08.583845   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:08.583858   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.164702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:11.178228   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:11.178293   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:11.217691   78377 cri.go:89] found id: ""
	I0422 18:27:11.217719   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.217729   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:11.217735   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:11.217796   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:11.253648   78377 cri.go:89] found id: ""
	I0422 18:27:11.253676   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.253685   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:11.253692   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:11.253753   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:11.290934   78377 cri.go:89] found id: ""
	I0422 18:27:11.290968   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.290979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:11.290988   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:11.291051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:11.331215   78377 cri.go:89] found id: ""
	I0422 18:27:11.331240   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.331249   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:11.331254   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:11.331344   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:11.371595   78377 cri.go:89] found id: ""
	I0422 18:27:11.371621   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.371629   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:11.371634   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:11.371697   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:11.413577   78377 cri.go:89] found id: ""
	I0422 18:27:11.413607   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.413616   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:11.413624   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:11.413684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:11.450669   78377 cri.go:89] found id: ""
	I0422 18:27:11.450700   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.450709   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:11.450717   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:11.450779   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:11.488096   78377 cri.go:89] found id: ""
	I0422 18:27:11.488122   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.488131   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:11.488142   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:11.488156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.540258   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:11.540299   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:11.555878   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:11.555922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:11.638190   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:11.638212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:11.638224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.719691   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:11.719726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:14.268811   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:14.283695   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:14.283749   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:14.323252   78377 cri.go:89] found id: ""
	I0422 18:27:14.323286   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.323299   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:14.323306   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:14.323370   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:14.362354   78377 cri.go:89] found id: ""
	I0422 18:27:14.362375   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.362382   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:14.362387   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:14.362450   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:14.405439   78377 cri.go:89] found id: ""
	I0422 18:27:14.405460   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.405467   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:14.405473   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:14.405531   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:14.445358   78377 cri.go:89] found id: ""
	I0422 18:27:14.445389   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.445399   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:14.445407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:14.445476   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:14.481933   78377 cri.go:89] found id: ""
	I0422 18:27:14.481961   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.481969   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:14.481974   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:14.482033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:14.526992   78377 cri.go:89] found id: ""
	I0422 18:27:14.527019   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.527028   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:14.527040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:14.527089   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:14.562197   78377 cri.go:89] found id: ""
	I0422 18:27:14.562221   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.562229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:14.562238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:14.562287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:14.599098   78377 cri.go:89] found id: ""
	I0422 18:27:14.599141   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.599153   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:14.599164   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:14.599177   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:14.655768   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:14.655800   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:14.670894   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:14.670929   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:14.759845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:14.759863   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:14.759874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:14.839715   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:14.839752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:17.384859   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:17.399664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:17.399741   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:17.439786   78377 cri.go:89] found id: ""
	I0422 18:27:17.439809   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.439817   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:17.439822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:17.439878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:17.476532   78377 cri.go:89] found id: ""
	I0422 18:27:17.476553   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.476561   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:17.476566   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:17.476623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:17.513464   78377 cri.go:89] found id: ""
	I0422 18:27:17.513488   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.513495   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:17.513500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:17.513546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:17.548793   78377 cri.go:89] found id: ""
	I0422 18:27:17.548821   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.548831   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:17.548838   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:17.548888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:17.584600   78377 cri.go:89] found id: ""
	I0422 18:27:17.584626   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.584636   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:17.584644   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:17.584705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:17.621574   78377 cri.go:89] found id: ""
	I0422 18:27:17.621603   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.621615   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:17.621622   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:17.621686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:17.663252   78377 cri.go:89] found id: ""
	I0422 18:27:17.663283   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.663290   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:17.663295   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:17.663352   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:17.702987   78377 cri.go:89] found id: ""
	I0422 18:27:17.703014   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.703025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:17.703035   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:17.703049   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:17.758182   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:17.758222   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:17.775796   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:17.775828   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:17.866450   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:17.866493   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:17.866507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:17.947651   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:17.947685   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.489441   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:20.502920   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:20.502987   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:20.540533   78377 cri.go:89] found id: ""
	I0422 18:27:20.540557   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.540565   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:20.540569   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:20.540612   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:20.578789   78377 cri.go:89] found id: ""
	I0422 18:27:20.578815   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.578824   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:20.578832   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:20.578900   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:20.613481   78377 cri.go:89] found id: ""
	I0422 18:27:20.613515   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.613525   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:20.613533   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:20.613597   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:20.650289   78377 cri.go:89] found id: ""
	I0422 18:27:20.650320   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.650331   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:20.650339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:20.650400   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:20.686259   78377 cri.go:89] found id: ""
	I0422 18:27:20.686288   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.686300   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:20.686306   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:20.686367   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:20.725983   78377 cri.go:89] found id: ""
	I0422 18:27:20.726011   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.726018   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:20.726024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:20.726092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:20.762193   78377 cri.go:89] found id: ""
	I0422 18:27:20.762220   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.762229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:20.762237   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:20.762295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:20.800738   78377 cri.go:89] found id: ""
	I0422 18:27:20.800761   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.800769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:20.800776   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:20.800787   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.842744   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:20.842771   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:20.896307   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:20.896337   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:20.911457   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:20.911485   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:20.985249   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:20.985277   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:20.985293   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:23.560513   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:23.585134   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:23.585214   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:23.624947   78377 cri.go:89] found id: ""
	I0422 18:27:23.624972   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.624980   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:23.624986   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:23.625051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:23.661886   78377 cri.go:89] found id: ""
	I0422 18:27:23.661915   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.661924   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:23.661929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:23.661997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:23.701061   78377 cri.go:89] found id: ""
	I0422 18:27:23.701087   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.701097   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:23.701104   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:23.701163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:23.742728   78377 cri.go:89] found id: ""
	I0422 18:27:23.742753   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.742760   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:23.742765   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:23.742813   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:23.786970   78377 cri.go:89] found id: ""
	I0422 18:27:23.787002   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.787011   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:23.787017   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:23.787070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:23.825253   78377 cri.go:89] found id: ""
	I0422 18:27:23.825282   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.825292   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:23.825300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:23.825357   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:23.865774   78377 cri.go:89] found id: ""
	I0422 18:27:23.865799   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.865807   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:23.865812   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:23.865860   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:23.903212   78377 cri.go:89] found id: ""
	I0422 18:27:23.903239   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.903247   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:23.903254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:23.903267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:23.958931   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:23.958968   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:23.973352   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:23.973383   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:24.053335   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:24.053356   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:24.053367   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:24.136491   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:24.136528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:26.679983   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:26.694521   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:26.694583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:26.733114   78377 cri.go:89] found id: ""
	I0422 18:27:26.733146   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.733156   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:26.733163   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:26.733221   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:26.776882   78377 cri.go:89] found id: ""
	I0422 18:27:26.776906   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.776913   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:26.776918   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:26.776966   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:26.822830   78377 cri.go:89] found id: ""
	I0422 18:27:26.822863   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.822874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:26.822882   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:26.822945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:26.868600   78377 cri.go:89] found id: ""
	I0422 18:27:26.868633   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.868641   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:26.868655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:26.868712   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:26.907547   78377 cri.go:89] found id: ""
	I0422 18:27:26.907570   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.907578   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:26.907583   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:26.907640   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:26.947594   78377 cri.go:89] found id: ""
	I0422 18:27:26.947635   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.947647   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:26.947656   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:26.947715   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:26.986732   78377 cri.go:89] found id: ""
	I0422 18:27:26.986761   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.986772   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:26.986780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:26.986838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:27.024338   78377 cri.go:89] found id: ""
	I0422 18:27:27.024370   78377 logs.go:276] 0 containers: []
	W0422 18:27:27.024378   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:27.024385   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:27.024396   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:27.077071   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:27.077112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:27.092664   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:27.092694   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:27.173056   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:27.173081   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:27.173099   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:27.257836   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:27.257877   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:29.800456   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:29.816085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:29.816150   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:29.858826   78377 cri.go:89] found id: ""
	I0422 18:27:29.858857   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.858878   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:29.858886   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:29.858956   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:29.900369   78377 cri.go:89] found id: ""
	I0422 18:27:29.900403   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.900417   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:29.900424   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:29.900490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:29.939766   78377 cri.go:89] found id: ""
	I0422 18:27:29.939801   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.939811   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:29.939818   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:29.939889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:29.986579   78377 cri.go:89] found id: ""
	I0422 18:27:29.986607   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.986617   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:29.986625   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:29.986685   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:30.030059   78377 cri.go:89] found id: ""
	I0422 18:27:30.030090   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.030102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:30.030110   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:30.030192   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:30.077543   78377 cri.go:89] found id: ""
	I0422 18:27:30.077573   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.077581   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:30.077586   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:30.077645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:30.123087   78377 cri.go:89] found id: ""
	I0422 18:27:30.123116   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.123137   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:30.123145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:30.123203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:30.160589   78377 cri.go:89] found id: ""
	I0422 18:27:30.160613   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.160621   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:30.160628   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:30.160639   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:30.213321   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:30.213352   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:30.228102   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:30.228129   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:30.303977   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:30.304013   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:30.304029   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:30.383817   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:30.383851   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:32.930619   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:32.943854   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:32.943914   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:32.984112   78377 cri.go:89] found id: ""
	I0422 18:27:32.984138   78377 logs.go:276] 0 containers: []
	W0422 18:27:32.984146   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:32.984151   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:32.984200   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:33.022243   78377 cri.go:89] found id: ""
	I0422 18:27:33.022283   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.022294   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:33.022301   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:33.022366   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:33.061177   78377 cri.go:89] found id: ""
	I0422 18:27:33.061205   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.061214   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:33.061222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:33.061281   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:33.104430   78377 cri.go:89] found id: ""
	I0422 18:27:33.104458   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.104466   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:33.104471   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:33.104528   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:33.140255   78377 cri.go:89] found id: ""
	I0422 18:27:33.140284   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.140295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:33.140302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:33.140362   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:33.179487   78377 cri.go:89] found id: ""
	I0422 18:27:33.179512   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.179519   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:33.179524   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:33.179576   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:33.217226   78377 cri.go:89] found id: ""
	I0422 18:27:33.217258   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.217265   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:33.217271   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:33.217319   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:33.257076   78377 cri.go:89] found id: ""
	I0422 18:27:33.257104   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.257114   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:33.257123   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:33.257137   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:33.271183   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:33.271211   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:33.344812   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:33.344843   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:33.344859   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:33.420605   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:33.420640   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:33.465779   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:33.465807   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.019062   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:36.039226   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:36.039305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:36.082940   78377 cri.go:89] found id: ""
	I0422 18:27:36.082978   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.082991   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:36.083000   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:36.083063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:36.120371   78377 cri.go:89] found id: ""
	I0422 18:27:36.120416   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.120428   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:36.120436   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:36.120496   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:36.158018   78377 cri.go:89] found id: ""
	I0422 18:27:36.158051   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.158063   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:36.158070   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:36.158131   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:36.196192   78377 cri.go:89] found id: ""
	I0422 18:27:36.196221   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.196231   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:36.196238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:36.196305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:36.237742   78377 cri.go:89] found id: ""
	I0422 18:27:36.237773   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.237784   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:36.237791   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:36.237852   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:36.277884   78377 cri.go:89] found id: ""
	I0422 18:27:36.277911   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.277918   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:36.277923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:36.277993   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:36.314897   78377 cri.go:89] found id: ""
	I0422 18:27:36.314929   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.314939   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:36.314947   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:36.315009   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:36.354806   78377 cri.go:89] found id: ""
	I0422 18:27:36.354833   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.354843   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:36.354851   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:36.354863   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.406941   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:36.406981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:36.423308   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:36.423344   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:36.507202   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:36.507223   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:36.507238   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:36.582489   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:36.582525   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:39.127409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:39.140820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:39.140895   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:39.182068   78377 cri.go:89] found id: ""
	I0422 18:27:39.182094   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.182105   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:39.182112   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:39.182169   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:39.222711   78377 cri.go:89] found id: ""
	I0422 18:27:39.222735   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.222751   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:39.222756   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:39.222827   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:39.263396   78377 cri.go:89] found id: ""
	I0422 18:27:39.263423   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.263432   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:39.263437   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:39.263490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:39.300559   78377 cri.go:89] found id: ""
	I0422 18:27:39.300589   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.300603   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:39.300610   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:39.300672   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:39.336486   78377 cri.go:89] found id: ""
	I0422 18:27:39.336521   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.336530   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:39.336536   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:39.336584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:39.373985   78377 cri.go:89] found id: ""
	I0422 18:27:39.374020   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.374030   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:39.374038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:39.374097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:39.412511   78377 cri.go:89] found id: ""
	I0422 18:27:39.412540   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.412547   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:39.412553   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:39.412616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:39.459197   78377 cri.go:89] found id: ""
	I0422 18:27:39.459233   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.459243   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:39.459254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:39.459269   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:39.514579   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:39.514623   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:39.530082   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:39.530107   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:39.603797   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:39.603830   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:39.603854   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:39.684853   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:39.684890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:42.227702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:42.243438   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:42.243499   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:42.290374   78377 cri.go:89] found id: ""
	I0422 18:27:42.290402   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.290413   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:42.290420   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:42.290481   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:42.332793   78377 cri.go:89] found id: ""
	I0422 18:27:42.332828   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.332840   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:42.332875   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:42.332937   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:42.375844   78377 cri.go:89] found id: ""
	I0422 18:27:42.375876   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.375884   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:42.375889   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:42.375945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:42.419725   78377 cri.go:89] found id: ""
	I0422 18:27:42.419758   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.419769   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:42.419777   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:42.419878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:42.453969   78377 cri.go:89] found id: ""
	I0422 18:27:42.454004   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.454014   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:42.454022   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:42.454080   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:42.489045   78377 cri.go:89] found id: ""
	I0422 18:27:42.489077   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.489087   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:42.489095   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:42.489157   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:42.529127   78377 cri.go:89] found id: ""
	I0422 18:27:42.529155   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.529166   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:42.529174   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:42.529229   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:42.566253   78377 cri.go:89] found id: ""
	I0422 18:27:42.566278   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.566286   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:42.566293   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:42.566307   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:42.622054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:42.622101   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:42.636278   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:42.636304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:42.712179   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:42.712203   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:42.712215   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:42.791885   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:42.791928   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:45.337091   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:45.353053   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:45.353133   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:45.393230   78377 cri.go:89] found id: ""
	I0422 18:27:45.393257   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.393267   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:45.393274   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:45.393330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:45.432183   78377 cri.go:89] found id: ""
	I0422 18:27:45.432210   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.432220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:45.432228   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:45.432285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:45.468114   78377 cri.go:89] found id: ""
	I0422 18:27:45.468147   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.468157   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:45.468169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:45.468233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:45.504793   78377 cri.go:89] found id: ""
	I0422 18:27:45.504817   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.504836   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:45.504841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:45.504889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:45.544822   78377 cri.go:89] found id: ""
	I0422 18:27:45.544851   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.544862   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:45.544868   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:45.544934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:45.588266   78377 cri.go:89] found id: ""
	I0422 18:27:45.588289   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.588322   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:45.588330   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:45.588391   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:45.625549   78377 cri.go:89] found id: ""
	I0422 18:27:45.625576   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.625583   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:45.625589   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:45.625639   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:45.663066   78377 cri.go:89] found id: ""
	I0422 18:27:45.663096   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.663104   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:45.663114   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:45.663143   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:45.715051   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:45.715082   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:45.729496   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:45.729523   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:45.801270   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:45.801296   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:45.801312   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:45.886530   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:45.886561   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:48.429822   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:48.444528   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:48.444610   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:48.483164   78377 cri.go:89] found id: ""
	I0422 18:27:48.483194   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.483204   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:48.483210   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:48.483257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:48.520295   78377 cri.go:89] found id: ""
	I0422 18:27:48.520321   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.520328   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:48.520333   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:48.520378   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:48.558839   78377 cri.go:89] found id: ""
	I0422 18:27:48.558866   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.558875   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:48.558881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:48.558939   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:48.599692   78377 cri.go:89] found id: ""
	I0422 18:27:48.599715   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.599722   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:48.599728   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:48.599773   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:48.638457   78377 cri.go:89] found id: ""
	I0422 18:27:48.638486   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.638494   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:48.638500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:48.638561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:48.677344   78377 cri.go:89] found id: ""
	I0422 18:27:48.677383   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.677395   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:48.677402   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:48.677466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:48.717129   78377 cri.go:89] found id: ""
	I0422 18:27:48.717155   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.717163   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:48.717169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:48.717219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:48.758256   78377 cri.go:89] found id: ""
	I0422 18:27:48.758281   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.758289   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:48.758297   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:48.758311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:48.810377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:48.810415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:48.824919   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:48.824949   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:48.908446   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:48.908473   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:48.908569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:48.984952   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:48.984991   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:51.527387   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:51.541482   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:51.541560   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.579020   78377 cri.go:89] found id: ""
	I0422 18:27:51.579098   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.579114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:51.579134   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:51.579204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:51.616430   78377 cri.go:89] found id: ""
	I0422 18:27:51.616456   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.616465   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:51.616470   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:51.616516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:51.654089   78377 cri.go:89] found id: ""
	I0422 18:27:51.654120   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.654131   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:51.654138   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:51.654201   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:51.693945   78377 cri.go:89] found id: ""
	I0422 18:27:51.693979   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.693993   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:51.694000   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:51.694068   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:51.732873   78377 cri.go:89] found id: ""
	I0422 18:27:51.732906   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.732917   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:51.732923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:51.732990   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:51.770772   78377 cri.go:89] found id: ""
	I0422 18:27:51.770794   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.770801   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:51.770807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:51.770862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:51.819370   78377 cri.go:89] found id: ""
	I0422 18:27:51.819397   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.819405   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:51.819411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:51.819459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:51.858001   78377 cri.go:89] found id: ""
	I0422 18:27:51.858033   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.858044   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:51.858055   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:51.858069   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:51.938531   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:51.938557   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:51.938571   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:52.014397   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:52.014435   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:52.059420   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:52.059458   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:52.119498   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:52.119534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:54.634238   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:54.649044   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:54.649119   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:54.691846   78377 cri.go:89] found id: ""
	I0422 18:27:54.691879   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.691890   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:54.691907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:54.691970   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:54.731466   78377 cri.go:89] found id: ""
	I0422 18:27:54.731496   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.731507   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:54.731515   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:54.731588   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:54.776948   78377 cri.go:89] found id: ""
	I0422 18:27:54.776972   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.776979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:54.776984   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:54.777031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:54.815908   78377 cri.go:89] found id: ""
	I0422 18:27:54.815939   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.815946   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:54.815952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:54.815997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:54.856641   78377 cri.go:89] found id: ""
	I0422 18:27:54.856673   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.856684   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:54.856690   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:54.856757   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:54.896968   78377 cri.go:89] found id: ""
	I0422 18:27:54.896996   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.897004   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:54.897009   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:54.897073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:54.936353   78377 cri.go:89] found id: ""
	I0422 18:27:54.936388   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.936400   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:54.936407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:54.936468   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:54.976009   78377 cri.go:89] found id: ""
	I0422 18:27:54.976038   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.976048   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:54.976058   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:54.976071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:55.027890   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:55.027924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:55.041914   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:55.041939   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:55.112556   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.112583   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:55.112597   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:55.187688   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:55.187723   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:57.730259   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:57.745006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:57.745073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:57.786906   78377 cri.go:89] found id: ""
	I0422 18:27:57.786942   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.786952   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:57.786959   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:57.787019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:57.827158   78377 cri.go:89] found id: ""
	I0422 18:27:57.827188   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.827199   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:57.827206   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:57.827254   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:57.864370   78377 cri.go:89] found id: ""
	I0422 18:27:57.864405   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.864413   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:57.864419   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:57.864475   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:57.903747   78377 cri.go:89] found id: ""
	I0422 18:27:57.903773   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.903781   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:57.903786   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:57.903846   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:57.941674   78377 cri.go:89] found id: ""
	I0422 18:27:57.941705   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.941713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:57.941718   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:57.941767   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:57.984888   78377 cri.go:89] found id: ""
	I0422 18:27:57.984918   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.984929   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:57.984935   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:57.984980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:58.026964   78377 cri.go:89] found id: ""
	I0422 18:27:58.026993   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.027006   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:58.027012   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:58.027059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:58.065403   78377 cri.go:89] found id: ""
	I0422 18:27:58.065430   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.065440   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:58.065450   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:58.065464   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:58.152471   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:58.152518   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:58.198766   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:58.198803   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:58.257760   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:58.257798   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:58.272656   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:58.272693   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:58.385784   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:00.886736   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:00.902607   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:00.902684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:00.941476   78377 cri.go:89] found id: ""
	I0422 18:28:00.941506   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.941515   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:00.941521   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:00.941571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:00.983107   78377 cri.go:89] found id: ""
	I0422 18:28:00.983142   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.983152   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:00.983159   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:00.983216   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:01.024419   78377 cri.go:89] found id: ""
	I0422 18:28:01.024448   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.024455   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:01.024461   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:01.024517   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:01.065941   78377 cri.go:89] found id: ""
	I0422 18:28:01.065973   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.065984   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:01.065992   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:01.066041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:01.107857   78377 cri.go:89] found id: ""
	I0422 18:28:01.107898   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.107908   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:01.107916   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:01.107980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:01.149626   78377 cri.go:89] found id: ""
	I0422 18:28:01.149657   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.149667   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:01.149676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:01.149740   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:01.190491   78377 cri.go:89] found id: ""
	I0422 18:28:01.190520   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.190529   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:01.190535   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:01.190590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:01.231145   78377 cri.go:89] found id: ""
	I0422 18:28:01.231176   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.231187   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:01.231197   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:01.231208   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:01.317826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:01.317874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:01.369441   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:01.369478   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:01.432210   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:01.432251   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:01.446720   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:01.446749   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:01.528643   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.029816   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:04.044751   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:04.044836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:04.085044   78377 cri.go:89] found id: ""
	I0422 18:28:04.085077   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.085089   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:04.085097   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:04.085148   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:04.129071   78377 cri.go:89] found id: ""
	I0422 18:28:04.129100   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.129111   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:04.129118   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:04.129181   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:04.167838   78377 cri.go:89] found id: ""
	I0422 18:28:04.167864   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.167874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:04.167881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:04.167943   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:04.216283   78377 cri.go:89] found id: ""
	I0422 18:28:04.216313   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.216321   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:04.216327   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:04.216376   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:04.255693   78377 cri.go:89] found id: ""
	I0422 18:28:04.255724   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.255731   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:04.255737   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:04.255786   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:04.293601   78377 cri.go:89] found id: ""
	I0422 18:28:04.293639   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.293651   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:04.293659   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:04.293709   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:04.358730   78377 cri.go:89] found id: ""
	I0422 18:28:04.358755   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.358767   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:04.358774   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:04.358837   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:04.399231   78377 cri.go:89] found id: ""
	I0422 18:28:04.399261   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.399271   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:04.399280   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:04.399291   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:04.415526   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:04.415558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:04.491845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.491871   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:04.491885   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:04.575076   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:04.575148   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:04.621931   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:04.621956   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.173117   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:07.188914   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:07.188973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:07.233867   78377 cri.go:89] found id: ""
	I0422 18:28:07.233894   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.233902   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:07.233907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:07.233968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:07.274777   78377 cri.go:89] found id: ""
	I0422 18:28:07.274818   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.274828   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:07.274835   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:07.274897   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:07.310813   78377 cri.go:89] found id: ""
	I0422 18:28:07.310864   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.310874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:07.310881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:07.310951   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:07.348397   78377 cri.go:89] found id: ""
	I0422 18:28:07.348423   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.348431   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:07.348436   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:07.348489   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:07.387344   78377 cri.go:89] found id: ""
	I0422 18:28:07.387371   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.387381   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:07.387388   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:07.387443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:07.426117   78377 cri.go:89] found id: ""
	I0422 18:28:07.426147   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.426158   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:07.426166   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:07.426233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:07.466624   78377 cri.go:89] found id: ""
	I0422 18:28:07.466653   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.466664   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:07.466671   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:07.466729   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:07.504282   78377 cri.go:89] found id: ""
	I0422 18:28:07.504306   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.504342   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:07.504353   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:07.504369   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:07.584111   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:07.584146   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:07.627212   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:07.627240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.676814   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:07.676849   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:07.691117   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:07.691156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:07.764300   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.265313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:10.280094   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:10.280170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:10.318208   78377 cri.go:89] found id: ""
	I0422 18:28:10.318236   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.318245   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:10.318251   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:10.318305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:10.353450   78377 cri.go:89] found id: ""
	I0422 18:28:10.353477   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.353484   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:10.353490   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:10.353547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:10.398359   78377 cri.go:89] found id: ""
	I0422 18:28:10.398389   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.398400   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:10.398411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:10.398474   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:10.435896   78377 cri.go:89] found id: ""
	I0422 18:28:10.435928   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.435939   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:10.435946   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:10.436025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:10.479313   78377 cri.go:89] found id: ""
	I0422 18:28:10.479342   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.479353   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:10.479360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:10.479433   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:10.521949   78377 cri.go:89] found id: ""
	I0422 18:28:10.521978   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.521990   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:10.521997   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:10.522054   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:10.557697   78377 cri.go:89] found id: ""
	I0422 18:28:10.557722   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.557732   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:10.557739   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:10.557804   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:10.595060   78377 cri.go:89] found id: ""
	I0422 18:28:10.595090   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.595102   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:10.595112   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:10.595142   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:10.649535   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:10.649570   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:10.664176   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:10.664210   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:10.748778   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.748818   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:10.748839   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:10.858019   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:10.858062   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:13.405737   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:13.420265   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:13.420342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:13.456505   78377 cri.go:89] found id: ""
	I0422 18:28:13.456534   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.456545   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:13.456551   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:13.456611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:13.493435   78377 cri.go:89] found id: ""
	I0422 18:28:13.493464   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.493477   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:13.493485   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:13.493541   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:13.530572   78377 cri.go:89] found id: ""
	I0422 18:28:13.530602   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.530614   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:13.530620   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:13.530682   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:13.565448   78377 cri.go:89] found id: ""
	I0422 18:28:13.565472   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.565480   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:13.565485   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:13.565574   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:13.613806   78377 cri.go:89] found id: ""
	I0422 18:28:13.613840   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.613851   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:13.613860   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:13.613924   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:13.649483   78377 cri.go:89] found id: ""
	I0422 18:28:13.649511   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.649522   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:13.649529   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:13.649589   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:13.689149   78377 cri.go:89] found id: ""
	I0422 18:28:13.689182   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.689193   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:13.689200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:13.689257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:13.726431   78377 cri.go:89] found id: ""
	I0422 18:28:13.726454   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.726461   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:13.726468   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:13.726480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:13.782843   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:13.782882   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:13.797390   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:13.797415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:13.877880   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:13.877905   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:13.877923   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:13.959103   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:13.959154   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
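	[Editor's note, for readers skimming the retry loop recorded above: each ~3 s iteration first probes for a running apiserver process, then asks CRI-O for each control-plane container by name, and, finding none, falls back to full log gathering; `kubectl describe nodes` keeps failing because nothing is listening on localhost:8443. A minimal hand-written sketch of the same probe sequence follows; the container names, binary path, and kubeconfig path are copied from the log itself, while the loop structure is illustrative only and is not minikube's actual implementation.]

	#!/usr/bin/env bash
	# Illustrative sketch only: mirrors the probe commands visible in the log above.
	# Assumes crictl and the bundled kubectl exist at the same paths minikube uses.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="${name}")
	  # An empty result corresponds to the 'No container was found matching ...' lines.
	  [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
	done
	# The health check that keeps returning "connection refused" while the apiserver is down:
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig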
	I0422 18:28:16.502589   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:16.519996   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:16.520070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:16.559001   78377 cri.go:89] found id: ""
	I0422 18:28:16.559029   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.559037   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:16.559043   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:16.559095   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:16.620188   78377 cri.go:89] found id: ""
	I0422 18:28:16.620211   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.620219   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:16.620224   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:16.620283   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:16.670220   78377 cri.go:89] found id: ""
	I0422 18:28:16.670253   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.670264   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:16.670279   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:16.670345   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:16.710931   78377 cri.go:89] found id: ""
	I0422 18:28:16.710962   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.710973   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:16.710980   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:16.711043   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:16.748793   78377 cri.go:89] found id: ""
	I0422 18:28:16.748838   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.748845   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:16.748851   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:16.748904   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:16.785518   78377 cri.go:89] found id: ""
	I0422 18:28:16.785547   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.785554   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:16.785564   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:16.785616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:16.825141   78377 cri.go:89] found id: ""
	I0422 18:28:16.825174   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.825192   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:16.825200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:16.825265   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:16.866918   78377 cri.go:89] found id: ""
	I0422 18:28:16.866947   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.866958   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:16.866972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:16.866987   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:16.912589   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:16.912633   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:16.968407   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:16.968446   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:16.983202   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:16.983241   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:17.063852   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:17.063875   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:17.063889   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:19.645012   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:19.659676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:19.659750   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:19.697348   78377 cri.go:89] found id: ""
	I0422 18:28:19.697382   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.697393   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:19.697401   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:19.697461   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:19.738830   78377 cri.go:89] found id: ""
	I0422 18:28:19.738864   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.738876   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:19.738883   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:19.738945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:19.783452   78377 cri.go:89] found id: ""
	I0422 18:28:19.783476   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.783483   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:19.783491   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:19.783554   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:19.826848   78377 cri.go:89] found id: ""
	I0422 18:28:19.826875   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.826886   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:19.826893   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:19.826945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:19.867207   78377 cri.go:89] found id: ""
	I0422 18:28:19.867229   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.867236   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:19.867242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:19.867298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:19.903752   78377 cri.go:89] found id: ""
	I0422 18:28:19.903783   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.903799   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:19.903806   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:19.903870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:19.946891   78377 cri.go:89] found id: ""
	I0422 18:28:19.946914   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.946921   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:19.946927   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:19.946997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:19.989272   78377 cri.go:89] found id: ""
	I0422 18:28:19.989297   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.989304   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:19.989312   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:19.989323   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:20.038854   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:20.038887   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:20.053553   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:20.053584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:20.132687   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:20.132712   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:20.132727   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:20.209600   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:20.209634   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.752356   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:22.765506   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:22.765567   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:22.804991   78377 cri.go:89] found id: ""
	I0422 18:28:22.805022   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.805029   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:22.805035   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:22.805082   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:22.843726   78377 cri.go:89] found id: ""
	I0422 18:28:22.843757   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.843768   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:22.843775   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:22.843838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:22.884584   78377 cri.go:89] found id: ""
	I0422 18:28:22.884610   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.884620   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:22.884627   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:22.884701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:22.920974   78377 cri.go:89] found id: ""
	I0422 18:28:22.921004   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.921020   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:22.921028   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:22.921092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:22.956676   78377 cri.go:89] found id: ""
	I0422 18:28:22.956702   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.956713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:22.956720   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:22.956784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:22.997517   78377 cri.go:89] found id: ""
	I0422 18:28:22.997545   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.997553   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:22.997559   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:22.997623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:23.036448   78377 cri.go:89] found id: ""
	I0422 18:28:23.036478   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.036489   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:23.036497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:23.036561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:23.075567   78377 cri.go:89] found id: ""
	I0422 18:28:23.075592   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.075600   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:23.075611   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:23.075625   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:23.130372   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:23.130408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:23.147534   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:23.147567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:23.222730   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:23.222753   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:23.222765   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:23.301972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:23.302006   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:25.847521   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:25.861780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:25.861867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:25.899314   78377 cri.go:89] found id: ""
	I0422 18:28:25.899341   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.899349   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:25.899355   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:25.899412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:25.940057   78377 cri.go:89] found id: ""
	I0422 18:28:25.940088   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.940099   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:25.940106   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:25.940163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:25.974923   78377 cri.go:89] found id: ""
	I0422 18:28:25.974951   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.974959   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:25.974968   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:25.975041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:26.012533   78377 cri.go:89] found id: ""
	I0422 18:28:26.012559   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.012566   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:26.012572   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:26.012620   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:26.049804   78377 cri.go:89] found id: ""
	I0422 18:28:26.049828   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.049835   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:26.049841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:26.049888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:26.092803   78377 cri.go:89] found id: ""
	I0422 18:28:26.092830   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.092842   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:26.092850   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:26.092919   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:26.130442   78377 cri.go:89] found id: ""
	I0422 18:28:26.130471   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.130480   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:26.130487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:26.130544   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:26.165933   78377 cri.go:89] found id: ""
	I0422 18:28:26.165957   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.165966   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:26.165974   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:26.165986   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:26.245237   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:26.245259   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:26.245278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:26.330143   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:26.330181   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.372178   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:26.372204   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:26.429779   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:26.429817   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:28.945985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:28.960470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:28.960546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:28.999618   78377 cri.go:89] found id: ""
	I0422 18:28:28.999639   78377 logs.go:276] 0 containers: []
	W0422 18:28:28.999648   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:28.999653   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:28.999711   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:29.034177   78377 cri.go:89] found id: ""
	I0422 18:28:29.034211   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.034220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:29.034225   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:29.034286   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:29.073759   78377 cri.go:89] found id: ""
	I0422 18:28:29.073782   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.073790   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:29.073796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:29.073857   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:29.111898   78377 cri.go:89] found id: ""
	I0422 18:28:29.111929   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.111941   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:29.111948   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:29.112005   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:29.148486   78377 cri.go:89] found id: ""
	I0422 18:28:29.148520   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.148531   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:29.148539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:29.148602   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:29.186715   78377 cri.go:89] found id: ""
	I0422 18:28:29.186743   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.186753   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:29.186759   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:29.186805   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:29.226387   78377 cri.go:89] found id: ""
	I0422 18:28:29.226422   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.226433   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:29.226440   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:29.226508   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:29.274102   78377 cri.go:89] found id: ""
	I0422 18:28:29.274131   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.274142   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:29.274152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:29.274165   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:29.333066   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:29.333104   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:29.348376   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:29.348411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:29.422976   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:29.423009   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:29.423022   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:29.501211   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:29.501253   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:32.048316   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:32.063859   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:32.063934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:32.104527   78377 cri.go:89] found id: ""
	I0422 18:28:32.104560   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.104571   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:32.104580   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:32.104645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:32.142945   78377 cri.go:89] found id: ""
	I0422 18:28:32.142976   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.142984   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:32.142990   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:32.143036   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:32.182359   78377 cri.go:89] found id: ""
	I0422 18:28:32.182385   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.182393   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:32.182399   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:32.182446   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:32.223041   78377 cri.go:89] found id: ""
	I0422 18:28:32.223069   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.223077   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:32.223083   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:32.223161   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:32.261892   78377 cri.go:89] found id: ""
	I0422 18:28:32.261924   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.261936   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:32.261943   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:32.262008   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:32.307497   78377 cri.go:89] found id: ""
	I0422 18:28:32.307527   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.307537   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:32.307546   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:32.307617   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:32.345180   78377 cri.go:89] found id: ""
	I0422 18:28:32.345214   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.345227   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:32.345235   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:32.345299   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:32.385999   78377 cri.go:89] found id: ""
	I0422 18:28:32.386025   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.386033   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:32.386041   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:32.386053   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:32.444377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:32.444436   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:32.460566   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:32.460594   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:32.535839   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:32.535860   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:32.535872   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:32.621998   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:32.622039   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.165079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:35.178804   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:35.178877   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:35.221032   78377 cri.go:89] found id: ""
	I0422 18:28:35.221065   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.221076   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:35.221083   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:35.221170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:35.262550   78377 cri.go:89] found id: ""
	I0422 18:28:35.262573   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.262583   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:35.262589   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:35.262651   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:35.301799   78377 cri.go:89] found id: ""
	I0422 18:28:35.301826   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.301834   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:35.301840   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:35.301901   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:35.340606   78377 cri.go:89] found id: ""
	I0422 18:28:35.340635   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.340642   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:35.340647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:35.340695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:35.386226   78377 cri.go:89] found id: ""
	I0422 18:28:35.386251   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.386261   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:35.386268   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:35.386330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:35.424555   78377 cri.go:89] found id: ""
	I0422 18:28:35.424584   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.424594   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:35.424601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:35.424662   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:35.465856   78377 cri.go:89] found id: ""
	I0422 18:28:35.465886   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.465895   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:35.465901   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:35.465963   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:35.504849   78377 cri.go:89] found id: ""
	I0422 18:28:35.504877   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.504887   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:35.504898   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:35.504931   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:35.579177   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:35.579202   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:35.579217   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:35.656322   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:35.656359   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.700376   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:35.700411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:35.753742   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:35.753776   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.269536   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:38.285945   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:38.286019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:38.324408   78377 cri.go:89] found id: ""
	I0422 18:28:38.324441   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.324461   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:38.324468   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:38.324539   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:38.362320   78377 cri.go:89] found id: ""
	I0422 18:28:38.362343   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.362350   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:38.362363   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:38.362411   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:38.404208   78377 cri.go:89] found id: ""
	I0422 18:28:38.404234   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.404243   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:38.404248   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:38.404309   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:38.448250   78377 cri.go:89] found id: ""
	I0422 18:28:38.448314   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.448325   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:38.448332   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:38.448397   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:38.485803   78377 cri.go:89] found id: ""
	I0422 18:28:38.485836   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.485848   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:38.485856   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:38.485915   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:38.525903   78377 cri.go:89] found id: ""
	I0422 18:28:38.525933   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.525943   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:38.525952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:38.526031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:38.562638   78377 cri.go:89] found id: ""
	I0422 18:28:38.562664   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.562672   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:38.562677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:38.562726   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:38.603614   78377 cri.go:89] found id: ""
	I0422 18:28:38.603642   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.603653   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:38.603662   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:38.603673   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:38.658054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:38.658086   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.674884   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:38.674908   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:38.748462   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:38.748502   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:38.748528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:38.826701   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:38.826741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.374075   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:41.389161   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:41.389235   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:41.427033   78377 cri.go:89] found id: ""
	I0422 18:28:41.427064   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.427075   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:41.427096   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:41.427178   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:41.465376   78377 cri.go:89] found id: ""
	I0422 18:28:41.465408   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.465419   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:41.465427   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:41.465512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:41.502451   78377 cri.go:89] found id: ""
	I0422 18:28:41.502482   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.502490   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:41.502501   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:41.502563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:41.538748   78377 cri.go:89] found id: ""
	I0422 18:28:41.538784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.538796   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:41.538803   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:41.538862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:41.576877   78377 cri.go:89] found id: ""
	I0422 18:28:41.576928   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.576941   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:41.576949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:41.577010   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:41.615062   78377 cri.go:89] found id: ""
	I0422 18:28:41.615094   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.615105   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:41.615113   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:41.615190   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:41.656757   78377 cri.go:89] found id: ""
	I0422 18:28:41.656784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.656792   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:41.656796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:41.656861   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:41.694351   78377 cri.go:89] found id: ""
	I0422 18:28:41.694374   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.694382   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:41.694390   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:41.694402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:41.775490   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:41.775528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.820152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:41.820182   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:41.874035   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:41.874071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:41.889510   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:41.889534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:41.967706   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:44.468471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:44.483108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:44.483202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:44.522503   78377 cri.go:89] found id: ""
	I0422 18:28:44.522528   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.522536   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:44.522542   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:44.522590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:44.562004   78377 cri.go:89] found id: ""
	I0422 18:28:44.562028   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.562036   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:44.562042   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:44.562098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:44.608907   78377 cri.go:89] found id: ""
	I0422 18:28:44.608944   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.608955   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:44.608964   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:44.609027   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:44.651192   78377 cri.go:89] found id: ""
	I0422 18:28:44.651225   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.651235   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:44.651242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:44.651304   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:44.693057   78377 cri.go:89] found id: ""
	I0422 18:28:44.693095   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.693102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:44.693108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:44.693152   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:44.731029   78377 cri.go:89] found id: ""
	I0422 18:28:44.731070   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.731079   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:44.731092   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:44.731165   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:44.768935   78377 cri.go:89] found id: ""
	I0422 18:28:44.768964   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.768985   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:44.768993   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:44.769044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:44.814942   78377 cri.go:89] found id: ""
	I0422 18:28:44.814966   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.814984   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:44.814992   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:44.815012   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:44.872586   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:44.872612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:44.929068   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:44.929125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:44.945931   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:44.945960   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:45.019871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:45.019907   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:45.019922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:47.601880   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:47.616133   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:47.616219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:47.656526   78377 cri.go:89] found id: ""
	I0422 18:28:47.656547   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.656554   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:47.656560   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:47.656618   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:47.696580   78377 cri.go:89] found id: ""
	I0422 18:28:47.696609   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.696619   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:47.696626   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:47.696684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:47.737309   78377 cri.go:89] found id: ""
	I0422 18:28:47.737340   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.737351   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:47.737359   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:47.737413   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:47.774541   78377 cri.go:89] found id: ""
	I0422 18:28:47.774572   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.774583   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:47.774591   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:47.774652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:47.810397   78377 cri.go:89] found id: ""
	I0422 18:28:47.810429   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.810437   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:47.810444   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:47.810506   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:47.847293   78377 cri.go:89] found id: ""
	I0422 18:28:47.847327   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.847337   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:47.847345   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:47.847403   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:47.887454   78377 cri.go:89] found id: ""
	I0422 18:28:47.887476   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.887486   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:47.887493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:47.887553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:47.926706   78377 cri.go:89] found id: ""
	I0422 18:28:47.926731   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.926740   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:47.926750   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:47.926769   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:48.007354   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:48.007382   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:48.007398   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:48.094355   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:48.094394   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:48.137163   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:48.137194   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:48.187732   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:48.187767   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:50.703686   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:50.717040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:50.717113   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:50.751573   78377 cri.go:89] found id: ""
	I0422 18:28:50.751598   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.751610   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:50.751617   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:50.751674   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:50.790434   78377 cri.go:89] found id: ""
	I0422 18:28:50.790465   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.790476   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:50.790483   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:50.790537   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:50.852414   78377 cri.go:89] found id: ""
	I0422 18:28:50.852442   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.852451   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:50.852457   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:50.852512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:50.891439   78377 cri.go:89] found id: ""
	I0422 18:28:50.891470   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.891481   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:50.891488   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:50.891553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:50.929376   78377 cri.go:89] found id: ""
	I0422 18:28:50.929409   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.929420   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:50.929428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:50.929493   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:50.963919   78377 cri.go:89] found id: ""
	I0422 18:28:50.963949   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.963957   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:50.963963   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:50.964022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:50.998583   78377 cri.go:89] found id: ""
	I0422 18:28:50.998621   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.998632   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:50.998640   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:50.998702   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:51.036477   78377 cri.go:89] found id: ""
	I0422 18:28:51.036504   78377 logs.go:276] 0 containers: []
	W0422 18:28:51.036511   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:51.036519   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:51.036531   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:51.092688   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:51.092735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.107749   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:51.107778   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:51.185620   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:51.185643   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:51.185665   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:51.268824   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:51.268856   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:53.814341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:53.829048   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:53.829123   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:53.873451   78377 cri.go:89] found id: ""
	I0422 18:28:53.873483   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.873493   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:53.873500   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:53.873564   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:53.915262   78377 cri.go:89] found id: ""
	I0422 18:28:53.915295   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.915306   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:53.915315   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:53.915404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:53.958526   78377 cri.go:89] found id: ""
	I0422 18:28:53.958556   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.958567   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:53.958575   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:53.958645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:53.997452   78377 cri.go:89] found id: ""
	I0422 18:28:53.997484   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.997496   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:53.997503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:53.997563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:54.035937   78377 cri.go:89] found id: ""
	I0422 18:28:54.035961   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.035970   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:54.035975   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:54.036022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:54.078858   78377 cri.go:89] found id: ""
	I0422 18:28:54.078885   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.078893   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:54.078898   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:54.078959   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:54.117431   78377 cri.go:89] found id: ""
	I0422 18:28:54.117454   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.117462   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:54.117470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:54.117516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:54.156022   78377 cri.go:89] found id: ""
	I0422 18:28:54.156050   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.156059   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:54.156068   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:54.156085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:54.234075   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:54.234095   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:54.234108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:54.314392   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:54.314430   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:54.359388   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:54.359420   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:54.416412   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:54.416449   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:56.934970   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:56.948741   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:56.948820   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:56.984911   78377 cri.go:89] found id: ""
	I0422 18:28:56.984943   78377 logs.go:276] 0 containers: []
	W0422 18:28:56.984954   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:56.984961   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:56.985026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:57.022939   78377 cri.go:89] found id: ""
	I0422 18:28:57.022967   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.022980   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:57.022986   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:57.023033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:57.064582   78377 cri.go:89] found id: ""
	I0422 18:28:57.064606   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.064619   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:57.064626   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:57.064686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:57.105214   78377 cri.go:89] found id: ""
	I0422 18:28:57.105248   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.105259   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:57.105266   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:57.105317   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:57.142061   78377 cri.go:89] found id: ""
	I0422 18:28:57.142093   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.142104   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:57.142112   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:57.142176   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:57.187628   78377 cri.go:89] found id: ""
	I0422 18:28:57.187658   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.187668   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:57.187675   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:57.187744   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:57.223614   78377 cri.go:89] found id: ""
	I0422 18:28:57.223637   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.223645   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:57.223650   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:57.223705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:57.261853   78377 cri.go:89] found id: ""
	I0422 18:28:57.261876   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.261883   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:57.261890   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:57.261902   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:57.317980   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:57.318017   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:57.334434   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:57.334469   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:57.409639   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:57.409664   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:57.409680   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:57.494197   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:57.494240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:00.069390   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:00.083231   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:00.083307   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:00.123418   78377 cri.go:89] found id: ""
	I0422 18:29:00.123448   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.123459   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:00.123470   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:00.123533   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:00.159047   78377 cri.go:89] found id: ""
	I0422 18:29:00.159070   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.159081   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:00.159087   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:00.159191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:00.197934   78377 cri.go:89] found id: ""
	I0422 18:29:00.197960   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.198074   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:00.198086   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:00.198164   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:00.235243   78377 cri.go:89] found id: ""
	I0422 18:29:00.235273   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.235281   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:00.235287   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:00.235342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:00.271866   78377 cri.go:89] found id: ""
	I0422 18:29:00.271901   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.271912   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:00.271921   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:00.271981   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:00.308481   78377 cri.go:89] found id: ""
	I0422 18:29:00.308518   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.308531   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:00.308539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:00.308590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:00.343970   78377 cri.go:89] found id: ""
	I0422 18:29:00.343998   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.344009   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:00.344016   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:00.344063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:00.381443   78377 cri.go:89] found id: ""
	I0422 18:29:00.381462   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.381468   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:00.381475   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:00.381486   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:00.436244   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:00.436278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:00.451487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:00.451512   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:00.522440   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:00.522467   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:00.522483   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:00.602301   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:00.602333   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:03.141925   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:03.155393   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:03.155470   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:03.192801   78377 cri.go:89] found id: ""
	I0422 18:29:03.192825   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.192832   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:03.192838   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:03.192896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:03.244352   78377 cri.go:89] found id: ""
	I0422 18:29:03.244384   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.244395   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:03.244403   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:03.244466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:03.303294   78377 cri.go:89] found id: ""
	I0422 18:29:03.303318   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.303326   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:03.303331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:03.303384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:03.354236   78377 cri.go:89] found id: ""
	I0422 18:29:03.354267   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.354275   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:03.354282   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:03.354343   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:03.394639   78377 cri.go:89] found id: ""
	I0422 18:29:03.394669   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.394679   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:03.394686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:03.394754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:03.431362   78377 cri.go:89] found id: ""
	I0422 18:29:03.431408   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.431419   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:03.431428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:03.431494   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:03.472150   78377 cri.go:89] found id: ""
	I0422 18:29:03.472178   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.472186   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:03.472191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:03.472253   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:03.508059   78377 cri.go:89] found id: ""
	I0422 18:29:03.508083   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.508091   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:03.508100   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:03.508112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:03.557491   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:03.557528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:03.573208   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:03.573245   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:03.643262   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:03.643284   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:03.643295   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:03.726353   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:03.726389   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.270762   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:06.284792   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:06.284866   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:06.324717   78377 cri.go:89] found id: ""
	I0422 18:29:06.324750   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.324762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:06.324770   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:06.324829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:06.368279   78377 cri.go:89] found id: ""
	I0422 18:29:06.368311   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.368320   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:06.368326   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:06.368390   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:06.413754   78377 cri.go:89] found id: ""
	I0422 18:29:06.413789   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.413800   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:06.413807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:06.413864   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:06.453290   78377 cri.go:89] found id: ""
	I0422 18:29:06.453324   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.453335   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:06.453343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:06.453402   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:06.494420   78377 cri.go:89] found id: ""
	I0422 18:29:06.494472   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.494485   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:06.494493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:06.494547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:06.533736   78377 cri.go:89] found id: ""
	I0422 18:29:06.533768   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.533776   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:06.533784   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:06.533855   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:06.575873   78377 cri.go:89] found id: ""
	I0422 18:29:06.575899   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.575910   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:06.575917   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:06.575973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:06.620505   78377 cri.go:89] found id: ""
	I0422 18:29:06.620532   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.620541   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:06.620555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:06.620569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:06.701583   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:06.701607   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:06.701621   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:06.789370   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:06.789408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.832879   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:06.832915   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:06.892055   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:06.892085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:09.409104   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:09.422213   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:09.422287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:09.463906   78377 cri.go:89] found id: ""
	I0422 18:29:09.463938   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.463949   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:09.463956   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:09.464016   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:09.504600   78377 cri.go:89] found id: ""
	I0422 18:29:09.504626   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.504634   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:09.504640   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:09.504701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:09.544271   78377 cri.go:89] found id: ""
	I0422 18:29:09.544297   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.544308   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:09.544315   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:09.544385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:09.584323   78377 cri.go:89] found id: ""
	I0422 18:29:09.584355   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.584367   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:09.584375   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:09.584443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:09.621595   78377 cri.go:89] found id: ""
	I0422 18:29:09.621622   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.621632   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:09.621638   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:09.621703   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:09.654701   78377 cri.go:89] found id: ""
	I0422 18:29:09.654731   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.654741   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:09.654749   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:09.654809   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:09.691517   78377 cri.go:89] found id: ""
	I0422 18:29:09.691544   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.691555   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:09.691561   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:09.691611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:09.726139   78377 cri.go:89] found id: ""
	I0422 18:29:09.726164   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.726172   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:09.726179   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:09.726192   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:09.796871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:09.796899   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:09.796920   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:09.876465   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:09.876509   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:09.917893   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:09.917930   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:09.968232   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:09.968273   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:12.484341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:12.499173   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:12.499243   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:12.536536   78377 cri.go:89] found id: ""
	I0422 18:29:12.536566   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.536577   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:12.536583   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:12.536642   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:12.578616   78377 cri.go:89] found id: ""
	I0422 18:29:12.578645   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.578655   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:12.578663   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:12.578742   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:12.615437   78377 cri.go:89] found id: ""
	I0422 18:29:12.615464   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.615475   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:12.615483   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:12.615552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:12.652622   78377 cri.go:89] found id: ""
	I0422 18:29:12.652647   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.652655   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:12.652661   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:12.652717   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:12.687831   78377 cri.go:89] found id: ""
	I0422 18:29:12.687863   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.687886   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:12.687895   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:12.687968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:12.725695   78377 cri.go:89] found id: ""
	I0422 18:29:12.725727   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.725734   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:12.725740   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:12.725801   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:12.764633   78377 cri.go:89] found id: ""
	I0422 18:29:12.764660   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.764669   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:12.764676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:12.764754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:12.803161   78377 cri.go:89] found id: ""
	I0422 18:29:12.803188   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.803199   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:12.803209   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:12.803225   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:12.874276   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:12.874298   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:12.874311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:12.961086   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:12.961123   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:13.009108   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:13.009134   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:13.060695   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:13.060741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.578465   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:15.592781   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:15.592847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:15.630723   78377 cri.go:89] found id: ""
	I0422 18:29:15.630763   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.630775   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:15.630784   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:15.630848   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:15.672656   78377 cri.go:89] found id: ""
	I0422 18:29:15.672682   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.672689   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:15.672694   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:15.672743   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:15.718081   78377 cri.go:89] found id: ""
	I0422 18:29:15.718107   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.718115   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:15.718120   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:15.718168   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:15.757204   78377 cri.go:89] found id: ""
	I0422 18:29:15.757229   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.757237   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:15.757242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:15.757289   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:15.793481   78377 cri.go:89] found id: ""
	I0422 18:29:15.793507   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.793515   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:15.793520   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:15.793571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:15.831366   78377 cri.go:89] found id: ""
	I0422 18:29:15.831414   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.831435   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:15.831443   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:15.831510   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:15.868553   78377 cri.go:89] found id: ""
	I0422 18:29:15.868583   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.868593   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:15.868601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:15.868657   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:15.908487   78377 cri.go:89] found id: ""
	I0422 18:29:15.908517   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.908527   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:15.908538   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:15.908553   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.923479   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:15.923507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:15.995109   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:15.995156   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:15.995172   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:16.074773   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:16.074812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.122088   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:16.122114   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:18.674525   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:18.688006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:18.688077   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:18.726070   78377 cri.go:89] found id: ""
	I0422 18:29:18.726101   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.726114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:18.726122   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:18.726183   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:18.762885   78377 cri.go:89] found id: ""
	I0422 18:29:18.762916   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.762928   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:18.762936   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:18.762996   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:18.802266   78377 cri.go:89] found id: ""
	I0422 18:29:18.802289   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.802297   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:18.802302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:18.802349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:18.841407   78377 cri.go:89] found id: ""
	I0422 18:29:18.841445   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.841453   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:18.841459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:18.841515   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:18.877234   78377 cri.go:89] found id: ""
	I0422 18:29:18.877308   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.877330   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:18.877343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:18.877410   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:18.917025   78377 cri.go:89] found id: ""
	I0422 18:29:18.917056   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.917063   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:18.917068   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:18.917124   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:18.954201   78377 cri.go:89] found id: ""
	I0422 18:29:18.954228   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.954235   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:18.954241   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:18.954298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:18.992427   78377 cri.go:89] found id: ""
	I0422 18:29:18.992454   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.992463   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:18.992471   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:18.992482   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:19.041093   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:19.041125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:19.056711   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:19.056742   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:19.142569   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:19.142593   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:19.142604   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:19.217815   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:19.217855   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:21.767953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:21.783373   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:21.783428   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:21.821614   78377 cri.go:89] found id: ""
	I0422 18:29:21.821644   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.821656   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:21.821664   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:21.821725   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:21.857122   78377 cri.go:89] found id: ""
	I0422 18:29:21.857151   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.857161   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:21.857168   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:21.857228   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:21.894803   78377 cri.go:89] found id: ""
	I0422 18:29:21.894825   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.894833   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:21.894841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:21.894896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:21.933665   78377 cri.go:89] found id: ""
	I0422 18:29:21.933701   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.933712   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:21.933723   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:21.933787   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:21.973071   78377 cri.go:89] found id: ""
	I0422 18:29:21.973113   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.973125   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:21.973143   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:21.973210   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:22.011359   78377 cri.go:89] found id: ""
	I0422 18:29:22.011391   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.011403   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:22.011410   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:22.011488   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:22.049681   78377 cri.go:89] found id: ""
	I0422 18:29:22.049709   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.049716   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:22.049721   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:22.049782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:22.088347   78377 cri.go:89] found id: ""
	I0422 18:29:22.088375   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.088386   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:22.088396   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:22.088410   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:22.142224   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:22.142267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:22.156643   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:22.156668   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:22.231849   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:22.231879   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:22.231892   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:22.313426   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:22.313470   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:24.863473   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:24.882024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:24.882098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:24.924050   78377 cri.go:89] found id: ""
	I0422 18:29:24.924081   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.924092   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:24.924100   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:24.924163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:24.976296   78377 cri.go:89] found id: ""
	I0422 18:29:24.976326   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.976335   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:24.976345   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:24.976412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:25.029222   78377 cri.go:89] found id: ""
	I0422 18:29:25.029251   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.029272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:25.029280   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:25.029349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:25.077673   78377 cri.go:89] found id: ""
	I0422 18:29:25.077706   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.077717   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:25.077724   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:25.077784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:25.125043   78377 cri.go:89] found id: ""
	I0422 18:29:25.125078   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.125090   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:25.125098   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:25.125179   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:25.175533   78377 cri.go:89] found id: ""
	I0422 18:29:25.175566   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.175577   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:25.175585   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:25.175647   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:25.221986   78377 cri.go:89] found id: ""
	I0422 18:29:25.222016   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.222024   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:25.222030   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:25.222091   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:25.264497   78377 cri.go:89] found id: ""
	I0422 18:29:25.264536   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.264547   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:25.264558   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:25.264574   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:25.374379   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:25.374438   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:25.418690   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:25.418726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:25.472266   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:25.472300   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:25.488487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:25.488582   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:25.586957   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:28.087958   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:28.102224   78377 kubeadm.go:591] duration metric: took 4m2.253635072s to restartPrimaryControlPlane
	W0422 18:29:28.102310   78377 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:28.102339   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:31.612457   78377 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.510090318s)
	I0422 18:29:31.612545   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:31.628958   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:31.640917   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:31.652696   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:31.652721   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:31.652770   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:31.664114   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:31.664168   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:31.674923   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:31.684843   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:31.684896   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:31.695240   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.706058   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:31.706111   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.717091   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:31.727265   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:31.727336   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:31.737801   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:31.812467   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:29:31.812529   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:31.966913   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:31.967059   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:31.967197   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:32.154019   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:32.156034   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:32.156123   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:32.156226   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:32.156318   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:32.156373   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:32.156431   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:32.156486   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:32.156545   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:32.156925   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:32.157393   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:32.157903   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:32.157945   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:32.158030   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:32.431206   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:32.644858   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:32.778777   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:32.983609   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:32.999320   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:32.999451   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:32.999532   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:33.136671   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:33.138828   78377 out.go:204]   - Booting up control plane ...
	I0422 18:29:33.138935   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:33.143714   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:33.145398   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:33.157636   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:33.157801   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:30:13.158118   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:30:13.158841   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:13.159056   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:18.159553   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:18.159883   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:28.159925   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:28.160147   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.161034   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:48.161430   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163100   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:31:28.163394   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163417   78377 kubeadm.go:309] 
	I0422 18:31:28.163487   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:31:28.163724   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:31:28.163734   78377 kubeadm.go:309] 
	I0422 18:31:28.163791   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:31:28.163857   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:31:28.164010   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:31:28.164024   78377 kubeadm.go:309] 
	I0422 18:31:28.164159   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:31:28.164207   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:31:28.164251   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:31:28.164265   78377 kubeadm.go:309] 
	I0422 18:31:28.164413   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:31:28.164579   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:31:28.164607   78377 kubeadm.go:309] 
	I0422 18:31:28.164767   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:31:28.164919   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:31:28.165050   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:31:28.165153   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:31:28.165169   78377 kubeadm.go:309] 
	I0422 18:31:28.166948   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:31:28.167081   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:31:28.167206   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:31:28.167328   78377 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0422 18:31:28.167404   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:31:28.857637   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:28.875137   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:31:28.887680   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:31:28.887713   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:31:28.887768   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:31:28.900305   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:31:28.900364   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:31:28.912825   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:31:28.927080   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:31:28.927184   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:31:28.939052   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.949650   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:31:28.949726   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.960782   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:31:28.972073   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:31:28.972131   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:31:28.983161   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:31:29.220135   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:33:25.762018   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:33:25.762162   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:33:25.763935   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:33:25.763996   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:33:25.764109   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:33:25.764234   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:33:25.764384   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:33:25.764478   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:33:25.766215   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:33:25.766332   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:33:25.766425   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:33:25.766525   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:33:25.766612   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:33:25.766680   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:33:25.766725   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:33:25.766778   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:33:25.766829   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:33:25.766907   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:33:25.766999   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:33:25.767062   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:33:25.767150   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:33:25.767210   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:33:25.767277   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:33:25.767378   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:33:25.767465   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:33:25.767602   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:33:25.767714   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:33:25.767848   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:33:25.767944   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:33:25.769378   78377 out.go:204]   - Booting up control plane ...
	I0422 18:33:25.769497   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:33:25.769600   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:33:25.769691   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:33:25.769819   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:33:25.769987   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:33:25.770059   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:33:25.770164   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770451   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770538   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770748   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770827   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771002   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771066   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771264   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771397   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771583   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771594   78377 kubeadm.go:309] 
	I0422 18:33:25.771655   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:33:25.771711   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:33:25.771726   78377 kubeadm.go:309] 
	I0422 18:33:25.771779   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:33:25.771836   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:33:25.771973   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:33:25.771981   78377 kubeadm.go:309] 
	I0422 18:33:25.772091   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:33:25.772132   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:33:25.772175   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:33:25.772182   78377 kubeadm.go:309] 
	I0422 18:33:25.772286   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:33:25.772374   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:33:25.772381   78377 kubeadm.go:309] 
	I0422 18:33:25.772491   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:33:25.772570   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:33:25.772641   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:33:25.772702   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:33:25.772741   78377 kubeadm.go:309] 
	I0422 18:33:25.772767   78377 kubeadm.go:393] duration metric: took 7m59.977108208s to StartCluster
	I0422 18:33:25.772800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:33:25.772854   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:33:25.824904   78377 cri.go:89] found id: ""
	I0422 18:33:25.824928   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.824946   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:33:25.824957   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:33:25.825011   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:33:25.864537   78377 cri.go:89] found id: ""
	I0422 18:33:25.864563   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.864570   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:33:25.864575   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:33:25.864630   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:33:25.906760   78377 cri.go:89] found id: ""
	I0422 18:33:25.906784   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.906793   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:33:25.906800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:33:25.906868   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:33:25.945325   78377 cri.go:89] found id: ""
	I0422 18:33:25.945347   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.945354   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:33:25.945360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:33:25.945407   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:33:25.984005   78377 cri.go:89] found id: ""
	I0422 18:33:25.984035   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.984052   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:33:25.984059   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:33:25.984121   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:33:26.023499   78377 cri.go:89] found id: ""
	I0422 18:33:26.023525   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.023535   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:33:26.023549   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:33:26.023611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:33:26.064439   78377 cri.go:89] found id: ""
	I0422 18:33:26.064468   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.064479   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:33:26.064487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:33:26.064552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:33:26.104231   78377 cri.go:89] found id: ""
	I0422 18:33:26.104254   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.104262   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:33:26.104270   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:33:26.104282   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:33:26.213826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:33:26.213871   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:33:26.278837   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:33:26.278866   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:33:26.337634   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:33:26.337677   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:33:26.351578   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:33:26.351605   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:33:26.445108   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0422 18:33:26.445139   78377 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:33:26.445177   78377 out.go:239] * 
	* 
	W0422 18:33:26.445248   78377 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.445279   78377 out.go:239] * 
	* 
	W0422 18:33:26.446406   78377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:33:26.450209   78377 out.go:177] 
	W0422 18:33:26.451494   78377 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.451552   78377 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:33:26.451576   78377 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:33:26.453333   78377 out.go:177] 

                                                
                                                
** /stderr **
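The kubeadm output captured above repeatedly shows the kubelet health endpoint on 127.0.0.1:10248 refusing connections, and it already names the commands to run next. A minimal sketch of following that advice from the host, assuming the profile name used in this run and the cri-o socket path shown in the log (crictl generally needs root inside the guest, hence the sudo):

	# Inspect the kubelet service and its recent journal entries inside the VM.
	out/minikube-linux-amd64 -p old-k8s-version-367072 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-367072 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# List Kubernetes containers, then read the logs of a failing one (CONTAINERID is a placeholder).
	out/minikube-linux-amd64 -p old-k8s-version-367072 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	out/minikube-linux-amd64 -p old-k8s-version-367072 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID"
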
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-367072 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
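minikube's own suggestion in the log above is to retry with the kubelet cgroup driver pinned to systemd. A sketch of that retry, reusing the exact arguments from the failing invocation quoted above; whether it clears the 10248 timeout depends on the guest's actual cgroup setup:

	out/minikube-linux-amd64 start -p old-k8s-version-367072 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd   # the flag suggested by the failure message
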
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (262.437529ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
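The {{.Host}} query above reports Running even though status exits non-zero, which fits the kubelet and apiserver being the unhealthy components here. A quick way to see which one is to drop the narrowing template (a sketch; the per-component field names beyond .Host are an assumption, not taken from this report):

	out/minikube-linux-amd64 status -p old-k8s-version-367072                                                     # summary of host, kubelet, apiserver, kubeconfig
	out/minikube-linux-amd64 status -p old-k8s-version-367072 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'    # assumed template fields
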
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-367072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-367072 logs -n 25: (1.525983472s)
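Beyond the last 25 lines gathered here, the advice box earlier in the output asks for the full log bundle when filing an issue. A sketch of capturing it and skimming only the warning/error records, keyed on the leading W/E severity letter of the glog-style lines (minikube logs also bundles container and system logs, which this rough filter will skip):

	out/minikube-linux-amd64 -p old-k8s-version-367072 logs --file=logs.txt   # full bundle, as the advice box suggests
	grep -E '^[WE][0-9]{4} ' logs.txt                                         # keep only warning/error klog lines
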
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo cat                              | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:21:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:21:44.651239   78377 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:21:44.651502   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651512   78377 out.go:304] Setting ErrFile to fd 2...
	I0422 18:21:44.651517   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651743   78377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:21:44.652361   78377 out.go:298] Setting JSON to false
	I0422 18:21:44.653361   78377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7450,"bootTime":1713802655,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:21:44.653418   78377 start.go:139] virtualization: kvm guest
	I0422 18:21:44.655663   78377 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:21:44.657140   78377 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:21:44.658441   78377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:21:44.657169   78377 notify.go:220] Checking for updates...
	I0422 18:21:44.661128   78377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:21:44.662518   78377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:21:44.663775   78377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:21:44.665418   78377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:21:44.667565   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:21:44.667940   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.667974   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.682806   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0422 18:21:44.683248   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.683772   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.683796   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.684162   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.684386   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.686458   78377 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:21:44.688047   78377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:21:44.688430   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.688471   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.703069   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0422 18:21:44.703543   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.704022   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.704045   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.704344   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.704551   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.740500   78377 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:21:44.741959   78377 start.go:297] selected driver: kvm2
	I0422 18:21:44.741977   78377 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.742115   78377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:21:44.742852   78377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.742936   78377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:21:44.757771   78377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:21:44.758147   78377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:21:44.758223   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:21:44.758237   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:21:44.758283   78377 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.758417   78377 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.760296   78377 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:21:44.761538   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:21:44.761589   78377 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:21:44.761603   78377 cache.go:56] Caching tarball of preloaded images
	I0422 18:21:44.761682   78377 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:21:44.761696   78377 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:21:44.761815   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:21:44.762033   78377 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:21:45.719482   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:48.791433   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:54.871446   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:57.943441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:04.023441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:07.095417   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:13.175430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:16.247522   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:22.327414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:25.399441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:31.479440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:34.551439   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:40.631451   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:43.703447   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:49.783400   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:52.855484   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:58.935464   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:02.007435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:08.087442   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:11.159452   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:17.239435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:20.311430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:26.391420   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:29.463418   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:35.543443   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:38.615421   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:44.695419   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:47.767475   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:53.847471   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:56.919436   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:02.999404   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:06.071458   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:12.151440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:15.223414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:18.227587   77634 start.go:364] duration metric: took 4m29.759611802s to acquireMachinesLock for "embed-certs-782377"
	I0422 18:24:18.227650   77634 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:18.227661   77634 fix.go:54] fixHost starting: 
	I0422 18:24:18.227979   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:18.228013   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:18.243001   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0422 18:24:18.243415   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:18.243835   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:24:18.243850   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:18.244219   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:18.244384   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:18.244534   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:24:18.246202   77634 fix.go:112] recreateIfNeeded on embed-certs-782377: state=Stopped err=<nil>
	I0422 18:24:18.246228   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	W0422 18:24:18.246399   77634 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:18.248257   77634 out.go:177] * Restarting existing kvm2 VM for "embed-certs-782377" ...
	I0422 18:24:18.249777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Start
	I0422 18:24:18.249966   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring networks are active...
	I0422 18:24:18.250666   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network default is active
	I0422 18:24:18.251036   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network mk-embed-certs-782377 is active
	I0422 18:24:18.251499   77634 main.go:141] libmachine: (embed-certs-782377) Getting domain xml...
	I0422 18:24:18.252150   77634 main.go:141] libmachine: (embed-certs-782377) Creating domain...
	I0422 18:24:18.225125   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:18.225168   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225565   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:24:18.225593   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225781   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:24:18.227460   77400 machine.go:97] duration metric: took 4m37.410379606s to provisionDockerMachine
	I0422 18:24:18.227495   77400 fix.go:56] duration metric: took 4m37.433636251s for fixHost
	I0422 18:24:18.227499   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 4m37.433656207s
	W0422 18:24:18.227517   77400 start.go:713] error starting host: provision: host is not running
	W0422 18:24:18.227584   77400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0422 18:24:18.227593   77400 start.go:728] Will try again in 5 seconds ...
	I0422 18:24:19.442937   77634 main.go:141] libmachine: (embed-certs-782377) Waiting to get IP...
	I0422 18:24:19.444048   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.444425   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.444484   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.444392   78906 retry.go:31] will retry after 283.008432ms: waiting for machine to come up
	I0422 18:24:19.729076   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.729457   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.729493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.729411   78906 retry.go:31] will retry after 252.047573ms: waiting for machine to come up
	I0422 18:24:19.983011   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.983417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.983442   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.983397   78906 retry.go:31] will retry after 300.528755ms: waiting for machine to come up
	I0422 18:24:20.286039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.286467   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.286500   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.286425   78906 retry.go:31] will retry after 426.555496ms: waiting for machine to come up
	I0422 18:24:20.715191   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.715601   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.715638   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.715525   78906 retry.go:31] will retry after 533.433633ms: waiting for machine to come up
	I0422 18:24:21.250151   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:21.250702   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:21.250732   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:21.250646   78906 retry.go:31] will retry after 854.033547ms: waiting for machine to come up
	I0422 18:24:22.106728   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.107083   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.107109   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.107036   78906 retry.go:31] will retry after 761.233698ms: waiting for machine to come up
	I0422 18:24:22.870007   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.870408   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.870435   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.870364   78906 retry.go:31] will retry after 1.121568589s: waiting for machine to come up
	I0422 18:24:23.229316   77400 start.go:360] acquireMachinesLock for no-preload-407991: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:24:23.993127   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:23.993600   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:23.993623   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:23.993535   78906 retry.go:31] will retry after 1.525222377s: waiting for machine to come up
	I0422 18:24:25.520203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:25.520584   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:25.520609   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:25.520557   78906 retry.go:31] will retry after 1.618927059s: waiting for machine to come up
	I0422 18:24:27.140862   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:27.141363   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:27.141391   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:27.141315   78906 retry.go:31] will retry after 1.828869827s: waiting for machine to come up
	I0422 18:24:28.972053   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:28.972472   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:28.972508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:28.972438   78906 retry.go:31] will retry after 2.456935091s: waiting for machine to come up
	I0422 18:24:31.430825   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:31.431208   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:31.431266   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:31.431181   78906 retry.go:31] will retry after 3.415431602s: waiting for machine to come up
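The block above is libmachine polling libvirt for the restarted VM's DHCP lease, sleeping a growing, jittered interval between attempts (the retry.go:31 messages). A minimal Go sketch of that wait loop, assuming a hypothetical lookupIP helper in place of the real libvirt DHCP-lease query; this is an illustration, not minikube's implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease lookup; it fails until the
// guest has obtained an address. Hypothetical helper for illustration only.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP, sleeping a little longer (with jitter) after each
// failed attempt, until the deadline expires.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff += backoff / 2 // grow roughly geometrically, as the log intervals do
	}
	return "", fmt.Errorf("machine %s did not come up within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:ab:0f:f2", 2*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}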
	I0422 18:24:36.144008   77929 start.go:364] duration metric: took 4m11.537292071s to acquireMachinesLock for "default-k8s-diff-port-856422"
	I0422 18:24:36.144073   77929 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:36.144079   77929 fix.go:54] fixHost starting: 
	I0422 18:24:36.144413   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:36.144450   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:36.161253   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0422 18:24:36.161715   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:36.162147   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:24:36.162166   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:36.162536   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:36.162743   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:36.162914   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:24:36.164366   77929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-856422: state=Stopped err=<nil>
	I0422 18:24:36.164397   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	W0422 18:24:36.164563   77929 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:36.166915   77929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-856422" ...
	I0422 18:24:34.847819   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848316   77634 main.go:141] libmachine: (embed-certs-782377) Found IP for machine: 192.168.50.114
	I0422 18:24:34.848339   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has current primary IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848357   77634 main.go:141] libmachine: (embed-certs-782377) Reserving static IP address...
	I0422 18:24:34.848741   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.848769   77634 main.go:141] libmachine: (embed-certs-782377) DBG | skip adding static IP to network mk-embed-certs-782377 - found existing host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"}
	I0422 18:24:34.848782   77634 main.go:141] libmachine: (embed-certs-782377) Reserved static IP address: 192.168.50.114
	I0422 18:24:34.848801   77634 main.go:141] libmachine: (embed-certs-782377) Waiting for SSH to be available...
	I0422 18:24:34.848808   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Getting to WaitForSSH function...
	I0422 18:24:34.850829   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851167   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.851199   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851332   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH client type: external
	I0422 18:24:34.851352   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa (-rw-------)
	I0422 18:24:34.851383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:34.851402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | About to run SSH command:
	I0422 18:24:34.851417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | exit 0
	I0422 18:24:34.975383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | SSH cmd err, output: <nil>: 
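Here the provisioner checks SSH reachability by running `exit 0` on the guest through an external ssh client with host-key checking disabled, treating a zero exit status as "SSH available". A rough Go sketch of that probe using the address and key path shown in the log; the option set mirrors the logged command line but is not the actual libmachine code:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` on the guest via the system ssh binary with the same
// kind of options the log shows (no known-hosts pollution, quiet output).
// A nil error from Run means the server answered and accepted the key.
func sshReady(user, addr, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "LogLevel=quiet",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, addr),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	ok := sshReady("docker", "192.168.50.114",
		"/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa")
	fmt.Println("ssh ready:", ok)
}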
	I0422 18:24:34.975812   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetConfigRaw
	I0422 18:24:34.976602   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:34.979578   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.979959   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.979992   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.980238   77634 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/config.json ...
	I0422 18:24:34.980472   77634 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:34.980497   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:34.980777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:34.983493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.983958   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.983999   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.984175   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:34.984372   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984710   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:34.984894   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:34.985074   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:34.985086   77634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:35.099838   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:35.099873   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100144   77634 buildroot.go:166] provisioning hostname "embed-certs-782377"
	I0422 18:24:35.100169   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100381   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.103203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103589   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.103618   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103754   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.103930   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104116   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104262   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.104446   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.104696   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.104720   77634 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-782377 && echo "embed-certs-782377" | sudo tee /etc/hostname
	I0422 18:24:35.223934   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-782377
	
	I0422 18:24:35.223962   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.227033   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227376   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.227413   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.227779   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.227976   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.228140   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.228334   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.228492   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.228508   77634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-782377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-782377/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-782377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:35.346513   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
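The shell fragment above is what provisioning pushes over SSH to pin the new hostname in /etc/hosts: rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new one. A small Go sketch of rendering that snippet for an arbitrary hostname, purely as an illustration of the template (not minikube's actual code):

package main

import "fmt"

// hostsCmd renders the /etc/hosts patch shown in the log for a given hostname.
func hostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsCmd("embed-certs-782377"))
}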
	I0422 18:24:35.346545   77634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:35.346561   77634 buildroot.go:174] setting up certificates
	I0422 18:24:35.346571   77634 provision.go:84] configureAuth start
	I0422 18:24:35.346598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.346898   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:35.349820   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350164   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.350192   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350301   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.352921   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353288   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.353314   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353488   77634 provision.go:143] copyHostCerts
	I0422 18:24:35.353543   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:35.353552   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:35.353619   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:35.353717   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:35.353725   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:35.353749   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:35.353801   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:35.353810   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:35.353831   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:35.353894   77634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.embed-certs-782377 san=[127.0.0.1 192.168.50.114 embed-certs-782377 localhost minikube]
	I0422 18:24:35.463676   77634 provision.go:177] copyRemoteCerts
	I0422 18:24:35.463733   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:35.463758   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.466567   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.467039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.467415   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.467605   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.467740   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.549947   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:35.576364   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:24:35.601539   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:35.625959   77634 provision.go:87] duration metric: took 279.37435ms to configureAuth
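configureAuth regenerates the docker-machine style server certificate with the SAN set listed above (127.0.0.1, the VM's IP, the machine name, localhost, minikube) and then scp's it to /etc/docker on the guest. Below is a simplified, self-signed Go sketch of producing a certificate with that SAN set; the real flow signs the cert with the existing CA under .minikube/certs, so treat this strictly as an illustration:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// Same SAN set as the log: IPs go into IPAddresses, names into DNSNames.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-782377"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.114")},
		DNSNames:     []string{"embed-certs-782377", "localhost", "minikube"},
	}

	// Self-signed here for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}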
	I0422 18:24:35.625992   77634 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:35.626171   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:35.626235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.629095   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.629533   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629707   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.629934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630077   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630238   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.630365   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.630546   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.630563   77634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:35.906862   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:35.906892   77634 machine.go:97] duration metric: took 926.403466ms to provisionDockerMachine
	I0422 18:24:35.906905   77634 start.go:293] postStartSetup for "embed-certs-782377" (driver="kvm2")
	I0422 18:24:35.906916   77634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:35.906934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:35.907241   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:35.907277   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.910029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.910438   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910599   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.910782   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.910993   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.911168   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.994189   77634 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:35.998376   77634 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:35.998395   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:35.998468   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:35.998545   77634 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:35.998650   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:36.008268   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:36.034031   77634 start.go:296] duration metric: took 127.110389ms for postStartSetup
	I0422 18:24:36.034081   77634 fix.go:56] duration metric: took 17.806421597s for fixHost
	I0422 18:24:36.034100   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.036964   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037357   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.037380   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.037775   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038051   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.038403   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:36.038568   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:36.038579   77634 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:24:36.143878   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810276.108619822
	
	I0422 18:24:36.143903   77634 fix.go:216] guest clock: 1713810276.108619822
	I0422 18:24:36.143911   77634 fix.go:229] Guest: 2024-04-22 18:24:36.108619822 +0000 UTC Remote: 2024-04-22 18:24:36.034084746 +0000 UTC m=+287.715620683 (delta=74.535076ms)
	I0422 18:24:36.143936   77634 fix.go:200] guest clock delta is within tolerance: 74.535076ms
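fix.go reads the guest clock via `date +%s.%N`, compares it with the host-side timestamp, and only adjusts the guest clock when the delta exceeds a tolerance (here 74.5ms was within tolerance). A small Go sketch of that comparison; the parsing helper and tolerance value are assumptions for illustration, but the two timestamps are the ones from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses the "seconds.nanoseconds" string printed by `date +%s.%N`
// (%N is always nine zero-padded digits, so the fraction is nanoseconds).
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance, for illustration
	guest, err := guestTime("1713810276.108619822") // guest clock from the log
	if err != nil {
		panic(err)
	}
	host := time.Unix(1713810276, 34084746) // host-side timestamp from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
}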
	I0422 18:24:36.143941   77634 start.go:83] releasing machines lock for "embed-certs-782377", held for 17.916313877s
	I0422 18:24:36.143966   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.144235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:36.146867   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147228   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.147257   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147431   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.147883   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148066   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148171   77634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:36.148218   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.148377   77634 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:36.148403   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.150838   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151150   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151176   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151268   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151296   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.151466   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.151628   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.151671   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151695   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151747   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.151880   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.152055   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.152209   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.152350   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.229109   77634 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:36.266621   77634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:36.421344   77634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:36.427814   77634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:36.427892   77634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:36.448157   77634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:36.448192   77634 start.go:494] detecting cgroup driver to use...
	I0422 18:24:36.448255   77634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:36.468930   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:36.485780   77634 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:36.485856   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:36.502182   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:36.521179   77634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:36.636244   77634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:36.783292   77634 docker.go:233] disabling docker service ...
	I0422 18:24:36.783366   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:36.803014   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:36.817938   77634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:36.957954   77634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:37.085750   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:37.101054   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:37.123504   77634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:37.123555   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.134422   77634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:37.134491   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.145961   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.157192   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.170117   77634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:37.188656   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.205792   77634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.225739   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.236719   77634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:37.246351   77634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:37.246401   77634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:37.261144   77634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:37.271464   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:37.395686   77634 ssh_runner.go:195] Run: sudo systemctl restart crio
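The sequence above edits the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with sed (pause image, cgroupfs cgroup manager, conmon cgroup) before reloading systemd and restarting crio. The Go sketch below applies the same substitutions to an in-memory copy of the drop-in, just to show what the sed expressions do; the starting file content is an assumed example, not the real drop-in:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed example of the crio drop-in before editing.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then re-add it as "pod" right
	// after cgroup_manager, mirroring the two sed commands in the log.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "${1}\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}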
	I0422 18:24:37.534079   77634 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:37.534156   77634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:37.539212   77634 start.go:562] Will wait 60s for crictl version
	I0422 18:24:37.539285   77634 ssh_runner.go:195] Run: which crictl
	I0422 18:24:37.543239   77634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:37.581460   77634 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:37.581562   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.611743   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.645811   77634 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:37.647247   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:37.650321   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.650811   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:37.650841   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.651055   77634 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:37.655865   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:37.673617   77634 kubeadm.go:877] updating cluster {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:37.673732   77634 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:37.673785   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:37.718534   77634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:37.718609   77634 ssh_runner.go:195] Run: which lz4
	I0422 18:24:37.723369   77634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:37.728270   77634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:37.728303   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:36.168344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Start
	I0422 18:24:36.168494   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring networks are active...
	I0422 18:24:36.169419   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network default is active
	I0422 18:24:36.169811   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network mk-default-k8s-diff-port-856422 is active
	I0422 18:24:36.170341   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Getting domain xml...
	I0422 18:24:36.171019   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Creating domain...
	I0422 18:24:37.407148   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting to get IP...
	I0422 18:24:37.408083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408430   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408509   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.408416   79040 retry.go:31] will retry after 267.855158ms: waiting for machine to come up
	I0422 18:24:37.677765   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678134   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678168   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.678084   79040 retry.go:31] will retry after 267.61504ms: waiting for machine to come up
	I0422 18:24:37.947737   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948250   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.948216   79040 retry.go:31] will retry after 351.088664ms: waiting for machine to come up
	I0422 18:24:38.300548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301057   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301090   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.301011   79040 retry.go:31] will retry after 560.164848ms: waiting for machine to come up
	I0422 18:24:38.862557   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863114   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.863075   79040 retry.go:31] will retry after 590.286684ms: waiting for machine to come up
	I0422 18:24:39.454925   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455483   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455510   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:39.455428   79040 retry.go:31] will retry after 870.474888ms: waiting for machine to come up
	I0422 18:24:39.338447   77634 crio.go:462] duration metric: took 1.615205556s to copy over tarball
	I0422 18:24:39.338545   77634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:41.640474   77634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301883484s)
	I0422 18:24:41.640514   77634 crio.go:469] duration metric: took 2.302038123s to extract the tarball
	I0422 18:24:41.640524   77634 ssh_runner.go:146] rm: /preloaded.tar.lz4
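Since the guest had no preloaded images, the preload tarball was scp'd over and unpacked into /var with tar + lz4 as the lines above show, then removed. A hedged Go sketch of invoking that extraction; paths come from the log, and the sketch assumes tar and lz4 are present on the target (illustration only, not the ssh_runner implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command the log runs over SSH: extract the lz4-compressed preload
	// tarball into /var while preserving security.capability xattrs.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preload extracted into /var")
}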
	I0422 18:24:41.680325   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:41.724755   77634 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:24:41.724777   77634 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:24:41.724785   77634 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.30.0 crio true true} ...
	I0422 18:24:41.724887   77634 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-782377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:24:41.724964   77634 ssh_runner.go:195] Run: crio config
	I0422 18:24:41.772680   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:41.772704   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:41.772715   77634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:24:41.772733   77634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-782377 NodeName:embed-certs-782377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:24:41.772898   77634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-782377"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:24:41.772964   77634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:24:41.783492   77634 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:24:41.783575   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:24:41.793500   77634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0422 18:24:41.810415   77634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:24:41.827504   77634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
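	The scp calls above drop the rendered kubelet drop-in, the kubelet unit, and the multi-document kubeadm.yaml.new (the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration dumped earlier) onto the node. A quick, optional sanity check that all four documents made it into the rendered file, assuming the path from this run:

	    grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new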
	I0422 18:24:41.845704   77634 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0422 18:24:41.849728   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
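	The bash one-liner above refreshes the control-plane entry in /etc/hosts: it filters out any stale line for control-plane.minikube.internal, appends the current mapping, and copies the result back with sudo so only the final write is privileged. A simplified sketch of the same pattern, using the IP and hostname from this run (the temp-file name is arbitrary and the grep match is loosened to the hostname only):

	    # drop any existing entry, then append the current one
	    { grep -v 'control-plane.minikube.internal' /etc/hosts; \
	      printf '192.168.50.114\tcontrol-plane.minikube.internal\n'; } > /tmp/hosts.$$
	    sudo cp "/tmp/hosts.$$" /etc/hosts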
	I0422 18:24:41.862798   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:41.998260   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:24:42.018779   77634 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377 for IP: 192.168.50.114
	I0422 18:24:42.018801   77634 certs.go:194] generating shared ca certs ...
	I0422 18:24:42.018820   77634 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:24:42.018977   77634 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:24:42.019034   77634 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:24:42.019048   77634 certs.go:256] generating profile certs ...
	I0422 18:24:42.019146   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/client.key
	I0422 18:24:42.019218   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key.d804c20e
	I0422 18:24:42.019298   77634 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key
	I0422 18:24:42.019455   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:24:42.019493   77634 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:24:42.019509   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:24:42.019539   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:24:42.019571   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:24:42.019606   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:24:42.019665   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:42.020460   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:24:42.065297   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:24:42.098581   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:24:42.139751   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:24:42.169770   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0422 18:24:42.199958   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:24:42.229298   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:24:42.254517   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:24:42.279390   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:24:42.303872   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:24:42.329704   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:24:42.355108   77634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:24:42.372684   77634 ssh_runner.go:195] Run: openssl version
	I0422 18:24:42.378631   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:24:42.389709   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394492   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394552   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.400346   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:24:42.411335   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:24:42.422568   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427213   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427278   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.433277   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:24:42.444618   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:24:42.455793   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460681   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460739   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.466785   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
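	Each openssl/ln pair above installs a PEM into the system trust store under its OpenSSL subject-hash name: `openssl x509 -hash -noout` prints the hash that TLS verifiers look up as <hash>.0 in /etc/ssl/certs, which is where the b5213941.0, 51391683.0 and 3ec20f2e.0 links come from. The same step written out explicitly for one certificate:

	    # compute the subject hash and link the cert under <hash>.0
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"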
	I0422 18:24:42.485401   77634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:24:42.491205   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:24:42.498635   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:24:42.510577   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:24:42.517596   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:24:42.524413   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:24:42.530872   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
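	Each `-checkend 86400` probe above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), so a zero exit means the cert is still good for at least a day. A compact sketch of the same check over a few of the certs seen in this run:

	    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
	      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	        || echo "${c}.crt expires within 24h"
	    done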
	I0422 18:24:42.537199   77634 kubeadm.go:391] StartCluster: {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:24:42.537319   77634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:24:42.537379   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.579863   77634 cri.go:89] found id: ""
	I0422 18:24:42.579944   77634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:24:42.590756   77634 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:24:42.590781   77634 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:24:42.590788   77634 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:24:42.590844   77634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:24:42.601517   77634 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:24:42.603120   77634 kubeconfig.go:125] found "embed-certs-782377" server: "https://192.168.50.114:8443"
	I0422 18:24:42.606189   77634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:24:42.616881   77634 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0422 18:24:42.616911   77634 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:24:42.616922   77634 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:24:42.616970   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.656829   77634 cri.go:89] found id: ""
	I0422 18:24:42.656923   77634 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:24:42.675575   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:24:42.686408   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:24:42.686431   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:24:42.686484   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:24:42.697303   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:24:42.697391   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:24:42.707693   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:24:42.717836   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:24:42.717932   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:24:42.729952   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.740902   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:24:42.740980   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.751946   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:24:42.761758   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:24:42.761830   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
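	The four grep/rm pairs above are the stale-kubeconfig cleanup: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm can regenerate it. The same logic written as a loop:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done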
	I0422 18:24:42.772699   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:24:42.783018   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:42.891737   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:40.327325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327782   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:40.327726   79040 retry.go:31] will retry after 926.321969ms: waiting for machine to come up
	I0422 18:24:41.255601   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256117   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:41.256072   79040 retry.go:31] will retry after 928.33371ms: waiting for machine to come up
	I0422 18:24:42.186290   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186798   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186826   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:42.186762   79040 retry.go:31] will retry after 1.708117553s: waiting for machine to come up
	I0422 18:24:43.896236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:43.896597   79040 retry.go:31] will retry after 1.720003793s: waiting for machine to come up
	I0422 18:24:44.055395   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.163622709s)
	I0422 18:24:44.055429   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.278840   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.351743   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
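	Instead of a full `kubeadm init`, the restart path replays individual init phases against the same config file, as logged above: certs, kubeconfig, kubelet-start, control-plane and etcd. A condensed form of that sequence (binary and config paths as used in this run; $phase is intentionally unquoted so each entry expands into the phase name plus its argument):

	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done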
	I0422 18:24:44.460115   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:24:44.460202   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:44.960631   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.460588   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.478048   77634 api_server.go:72] duration metric: took 1.017932232s to wait for apiserver process to appear ...
	I0422 18:24:45.478082   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:24:45.478104   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:45.478702   77634 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0422 18:24:45.978527   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.247298   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:24:48.247334   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:24:48.247351   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.295953   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.296005   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.478899   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.488884   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.488920   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.978472   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.992521   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.992552   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:49.479179   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:49.485588   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:24:49.493015   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:24:49.493055   77634 api_server.go:131] duration metric: took 4.01496465s to wait for apiserver health ...
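	The wait above polls https://192.168.50.114:8443/healthz until it returns 200. The early 403 is the unauthenticated probe being rejected, presumably because the RBAC bootstrap roles had not been created yet (note the [-]poststarthook/rbac/bootstrap-roles entries), and the 500 responses list each post-start hook with a [+]/[-] marker until all have completed. The same endpoint can be queried by hand; the verbose form asks the apiserver for the per-check breakdown shown above:

	    curl -ks 'https://192.168.50.114:8443/healthz?verbose'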
	I0422 18:24:49.493065   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:49.493074   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:49.494997   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:24:45.618240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618714   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618744   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:45.618673   79040 retry.go:31] will retry after 2.396679945s: waiting for machine to come up
	I0422 18:24:48.016812   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017231   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017258   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:48.017197   79040 retry.go:31] will retry after 2.304959564s: waiting for machine to come up
	I0422 18:24:49.496476   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:24:49.516525   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:24:49.541103   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:24:49.552224   77634 system_pods.go:59] 8 kube-system pods found
	I0422 18:24:49.552263   77634 system_pods.go:61] "coredns-7db6d8ff4d-lxcv2" [137ad3db-8bc5-4b7f-8eb0-12a278eba41c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:24:49.552273   77634 system_pods.go:61] "etcd-embed-certs-782377" [85322e31-1ad6-4239-8086-f2a465a28d8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:24:49.552287   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [e791d7d4-a94d-4cce-a50d-4e569350f210] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:24:49.552307   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [cbcc2e7f-7b3a-435b-97d5-5b69b7e399c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:24:49.552317   77634 system_pods.go:61] "kube-proxy-r4249" [7ffb3b8f-53d8-45df-8426-74f0ffb0d20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 18:24:49.552327   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [9568040b-3eca-403e-b078-d6f2071e70c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:24:49.552335   77634 system_pods.go:61] "metrics-server-569cc877fc-d8s5p" [3bcda1df-02f7-4405-95c7-4d8559a0138c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:24:49.552342   77634 system_pods.go:61] "storage-provisioner" [c196d779-346a-4e3f-b1c3-dde4292df017] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:24:49.552351   77634 system_pods.go:74] duration metric: took 11.221599ms to wait for pod list to return data ...
	I0422 18:24:49.552373   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:24:49.556086   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:24:49.556130   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:24:49.556142   77634 node_conditions.go:105] duration metric: took 3.764067ms to run NodePressure ...
	I0422 18:24:49.556161   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:49.852023   77634 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856866   77634 kubeadm.go:733] kubelet initialised
	I0422 18:24:49.856894   77634 kubeadm.go:734] duration metric: took 4.83996ms waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856904   77634 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:24:49.863808   77634 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.868817   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868840   77634 pod_ready.go:81] duration metric: took 5.001181ms for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.868849   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868855   77634 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.873591   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873612   77634 pod_ready.go:81] duration metric: took 4.750292ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.873621   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873627   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.878471   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878494   77634 pod_ready.go:81] duration metric: took 4.859998ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.878503   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878510   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.945869   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945909   77634 pod_ready.go:81] duration metric: took 67.385628ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.945923   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945932   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345633   77634 pod_ready.go:92] pod "kube-proxy-r4249" in "kube-system" namespace has status "Ready":"True"
	I0422 18:24:50.345655   77634 pod_ready.go:81] duration metric: took 399.713725ms for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345666   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:52.352988   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:50.324396   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324920   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324953   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:50.324894   79040 retry.go:31] will retry after 4.018790507s: waiting for machine to come up
	I0422 18:24:54.347584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348046   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Found IP for machine: 192.168.61.206
	I0422 18:24:54.348081   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has current primary IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348094   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserving static IP address...
	I0422 18:24:54.348535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserved static IP address: 192.168.61.206
	I0422 18:24:54.348560   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for SSH to be available...
	I0422 18:24:54.348584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.348624   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | skip adding static IP to network mk-default-k8s-diff-port-856422 - found existing host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"}
	I0422 18:24:54.348640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Getting to WaitForSSH function...
	I0422 18:24:54.351069   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351570   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.351608   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH client type: external
	I0422 18:24:54.351758   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa (-rw-------)
	I0422 18:24:54.351793   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:54.351810   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | About to run SSH command:
	I0422 18:24:54.351834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | exit 0
	I0422 18:24:54.479277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | SSH cmd err, output: <nil>: 
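	The WaitForSSH probe above shells out to the system ssh client with host-key checking disabled and a throwaway known-hosts file, and simply runs `exit 0` to confirm that key-based login works. A trimmed-down equivalent of that probe, using the key path and address from this run:

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa \
	      docker@192.168.61.206 'exit 0'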
	I0422 18:24:54.479674   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetConfigRaw
	I0422 18:24:54.480350   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.483089   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.483498   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483801   77929 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/config.json ...
	I0422 18:24:54.484031   77929 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:54.484051   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:54.484272   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.486449   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.486857   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486992   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.487178   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487470   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.487635   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.487825   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.487838   77929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:55.812288   78377 start.go:364] duration metric: took 3m11.050220887s to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:24:55.812348   78377 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:55.812359   78377 fix.go:54] fixHost starting: 
	I0422 18:24:55.812769   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:55.812806   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:55.830114   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0422 18:24:55.830528   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:55.831130   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:24:55.831155   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:55.831459   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:55.831688   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:24:55.831855   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:24:55.833322   78377 fix.go:112] recreateIfNeeded on old-k8s-version-367072: state=Stopped err=<nil>
	I0422 18:24:55.833351   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	W0422 18:24:55.833481   78377 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:55.835517   78377 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	I0422 18:24:54.603732   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:54.603759   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.603993   77929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-856422"
	I0422 18:24:54.604017   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.604280   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.606938   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607302   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.607331   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607524   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.607693   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.607856   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.608002   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.608174   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.608381   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.608398   77929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-856422 && echo "default-k8s-diff-port-856422" | sudo tee /etc/hostname
	I0422 18:24:54.734622   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-856422
	
	I0422 18:24:54.734646   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.737804   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738109   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.738141   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.738495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738773   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.738950   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.739157   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.739176   77929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-856422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-856422/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-856422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:54.864646   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:54.864679   77929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:54.864732   77929 buildroot.go:174] setting up certificates
	I0422 18:24:54.864745   77929 provision.go:84] configureAuth start
	I0422 18:24:54.864764   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.865059   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.868205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868626   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.868666   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868868   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.871736   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872118   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.872147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872275   77929 provision.go:143] copyHostCerts
	I0422 18:24:54.872340   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:54.872353   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:54.872424   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:54.872545   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:54.872557   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:54.872598   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:54.872676   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:54.872688   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:54.872718   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:54.872794   77929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-856422 san=[127.0.0.1 192.168.61.206 default-k8s-diff-port-856422 localhost minikube]
	I0422 18:24:55.091765   77929 provision.go:177] copyRemoteCerts
	I0422 18:24:55.091820   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:55.091848   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.094572   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.094939   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.094970   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.095209   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.095501   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.095767   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.095958   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.192243   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:55.223313   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0422 18:24:55.250149   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:55.279442   77929 provision.go:87] duration metric: took 414.679508ms to configureAuth
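Note: the server certificate referenced above is produced by minikube's own Go helpers (provision.go) using the CA and SAN list shown at the "generating server cert" line. A purely illustrative openssl equivalent follows; the output file names and the 825-day lifetime are assumptions, not values from this run:

    # CA inputs are ca.pem / ca-key.pem as in the log; output names are illustrative
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-diff-port-856422"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 825 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.61.206,DNS:default-k8s-diff-port-856422,DNS:localhost,DNS:minikube")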
	I0422 18:24:55.279474   77929 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:55.280056   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:55.280125   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.282806   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.283237   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283405   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.283636   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283803   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283941   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.284109   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.284276   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.284294   77929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:55.565199   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:55.565225   77929 machine.go:97] duration metric: took 1.081180365s to provisionDockerMachine
	I0422 18:24:55.565239   77929 start.go:293] postStartSetup for "default-k8s-diff-port-856422" (driver="kvm2")
	I0422 18:24:55.565282   77929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:55.565312   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.565649   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:55.565682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.568211   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.568614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568809   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.568994   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.569182   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.569352   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.654461   77929 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:55.658992   77929 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:55.659016   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:55.659091   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:55.659199   77929 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:55.659309   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:55.669183   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:55.694953   77929 start.go:296] duration metric: took 129.698973ms for postStartSetup
	I0422 18:24:55.694998   77929 fix.go:56] duration metric: took 19.550918724s for fixHost
	I0422 18:24:55.695021   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.697596   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.697926   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.697958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.698133   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.698325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698479   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698579   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.698680   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.698897   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.698914   77929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:24:55.812106   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810295.778892948
	
	I0422 18:24:55.812132   77929 fix.go:216] guest clock: 1713810295.778892948
	I0422 18:24:55.812143   77929 fix.go:229] Guest: 2024-04-22 18:24:55.778892948 +0000 UTC Remote: 2024-04-22 18:24:55.69500303 +0000 UTC m=+271.245786903 (delta=83.889918ms)
	I0422 18:24:55.812168   77929 fix.go:200] guest clock delta is within tolerance: 83.889918ms
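Note: the guest/host clock comparison above amounts to running `date +%s.%N` on the guest over SSH and diffing it against local time, after which minikube checks the delta against its internal tolerance. A rough by-hand equivalent (the key path and user here are placeholders):

    guest=$(ssh -i ./id_rsa docker@192.168.61.206 'date +%s.%N')   # placeholder key path and user
    host=$(date +%s.%N)
    echo "delta: $(echo "$host - $guest" | bc)s"                   # ~0.084s in this run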
	I0422 18:24:55.812176   77929 start.go:83] releasing machines lock for "default-k8s-diff-port-856422", held for 19.668119564s
	I0422 18:24:55.812213   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.812500   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:55.815404   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.815786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.815828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.816036   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816526   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816698   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816781   77929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:55.816823   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.817092   77929 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:55.817116   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.819495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819710   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819931   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.819958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820045   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.820181   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820217   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820362   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820366   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820631   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.820716   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820845   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.904810   77929 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:55.937093   77929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:56.089389   77929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:56.096144   77929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:56.096208   77929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:56.118194   77929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:56.118224   77929 start.go:494] detecting cgroup driver to use...
	I0422 18:24:56.118292   77929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:56.134918   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:56.154107   77929 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:56.154180   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:56.168971   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:56.188793   77929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:56.310223   77929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:56.492316   77929 docker.go:233] disabling docker service ...
	I0422 18:24:56.492430   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:56.515169   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:56.529734   77929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:56.670628   77929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:56.810823   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:56.826785   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:56.847682   77929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:56.847741   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.860499   77929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:56.860576   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.872086   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.883347   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.901596   77929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:56.916912   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.928121   77929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.947335   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.958431   77929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:56.968077   77929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:56.968131   77929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:56.982135   77929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:56.991801   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:57.125635   77929 ssh_runner.go:195] Run: sudo systemctl restart crio
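Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf containing at least the following lines before CRI-O is restarted; this is reconstructed from the commands, not a file captured from the VM, and section headers are omitted:

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]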
	I0422 18:24:57.263889   77929 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:57.263973   77929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:57.269573   77929 start.go:562] Will wait 60s for crictl version
	I0422 18:24:57.269627   77929 ssh_runner.go:195] Run: which crictl
	I0422 18:24:57.273613   77929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:57.314357   77929 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:57.314463   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.345062   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.380868   77929 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:54.353338   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:56.853757   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:57.382284   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:57.385215   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:57.385655   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385889   77929 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:57.390482   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
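Note: the /etc/hosts rewrite above uses a temp-file-then-`sudo cp` pattern because shell output redirection would not run under sudo; the same idea stands alone as:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.61.1\thost.minikube.internal'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts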
	I0422 18:24:57.405644   77929 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:57.405766   77929 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:57.405868   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:57.452528   77929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:57.452604   77929 ssh_runner.go:195] Run: which lz4
	I0422 18:24:57.456903   77929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:57.461373   77929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:57.461411   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:59.060426   77929 crio.go:462] duration metric: took 1.603560712s to copy over tarball
	I0422 18:24:59.060532   77929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:55.836947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .Start
	I0422 18:24:55.837156   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:24:55.837991   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:24:55.838340   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:24:55.838802   78377 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:24:55.839484   78377 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:24:57.114447   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:24:57.115418   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.115808   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.115885   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.115780   79197 retry.go:31] will retry after 292.692957ms: waiting for machine to come up
	I0422 18:24:57.410220   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.410760   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.410793   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.410707   79197 retry.go:31] will retry after 381.746596ms: waiting for machine to come up
	I0422 18:24:57.794121   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.794537   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.794561   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.794500   79197 retry.go:31] will retry after 343.501318ms: waiting for machine to come up
	I0422 18:24:58.140203   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.140843   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.140872   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.140795   79197 retry.go:31] will retry after 497.222481ms: waiting for machine to come up
	I0422 18:24:58.639611   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.640103   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.640133   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.640061   79197 retry.go:31] will retry after 578.746837ms: waiting for machine to come up
	I0422 18:24:59.220771   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.221312   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.221342   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.221264   79197 retry.go:31] will retry after 773.821721ms: waiting for machine to come up
	I0422 18:24:58.854112   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:00.856147   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:01.563849   77929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503290941s)
	I0422 18:25:01.563881   77929 crio.go:469] duration metric: took 2.503413712s to extract the tarball
	I0422 18:25:01.563891   77929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:01.603330   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:01.649885   77929 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:25:01.649909   77929 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:25:01.649916   77929 kubeadm.go:928] updating node { 192.168.61.206 8444 v1.30.0 crio true true} ...
	I0422 18:25:01.650053   77929 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-856422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
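Note: the kubelet unit fragment above is what gets written out by the two systemd scp steps further down (10-kubeadm.conf and kubelet.service); once those files are in place, the merged unit can be inspected and applied with:

    systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload && sudo systemctl restart kubelet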
	I0422 18:25:01.650143   77929 ssh_runner.go:195] Run: crio config
	I0422 18:25:01.698892   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:01.698915   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:01.698929   77929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:01.698948   77929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.206 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-856422 NodeName:default-k8s-diff-port-856422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:01.699075   77929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.206
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-856422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:01.699150   77929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:01.709830   77929 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:01.709903   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:01.720447   77929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0422 18:25:01.738745   77929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:01.756420   77929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
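Note: at this point the rendered kubeadm config from the block above sits on the node as /var/tmp/minikube/kubeadm.yaml.new. A quick sanity check before the init phases run could look like the following (the `config validate` subcommand is assumed to be available in kubeadm v1.30; it is not part of this run):

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new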
	I0422 18:25:01.775364   77929 ssh_runner.go:195] Run: grep 192.168.61.206	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:01.779476   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:01.792860   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:01.920607   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:01.939637   77929 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422 for IP: 192.168.61.206
	I0422 18:25:01.939658   77929 certs.go:194] generating shared ca certs ...
	I0422 18:25:01.939675   77929 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:01.939858   77929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:01.939911   77929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:01.939922   77929 certs.go:256] generating profile certs ...
	I0422 18:25:01.940026   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/client.key
	I0422 18:25:01.940105   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key.e8400874
	I0422 18:25:01.940170   77929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key
	I0422 18:25:01.940320   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:01.940386   77929 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:01.940400   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:01.940437   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:01.940474   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:01.940506   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:01.940603   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:01.941408   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:01.981392   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:02.020335   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:02.057221   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:02.088571   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 18:25:02.123716   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:02.153926   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:02.183499   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:02.212438   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:02.238650   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:02.265786   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:02.295001   77929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:02.315343   77929 ssh_runner.go:195] Run: openssl version
	I0422 18:25:02.322001   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:02.334785   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340619   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340686   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.348942   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:02.364960   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:02.381460   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386720   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386794   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.392894   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:02.404951   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:02.417334   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423503   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423573   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.430512   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:02.444132   77929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:02.449749   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:02.456667   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:02.463700   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:02.470474   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:02.477324   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:02.483900   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
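Note: `-checkend 86400` makes openssl exit 0 only if the certificate is still valid 24 hours from now, so the run of commands above is effectively an expiry sweep over the control-plane certs, presumably used to decide whether regeneration is needed. The same sweep, looped over the paths from the log:

    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        && echo "$c: valid for at least 24h" \
        || echo "$c: expires within 24h"
    done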
	I0422 18:25:02.490614   77929 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:02.490719   77929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:02.490768   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.538766   77929 cri.go:89] found id: ""
	I0422 18:25:02.538849   77929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:02.549686   77929 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:02.549711   77929 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:02.549717   77929 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:02.549794   77929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:02.560594   77929 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:02.561584   77929 kubeconfig.go:125] found "default-k8s-diff-port-856422" server: "https://192.168.61.206:8444"
	I0422 18:25:02.563656   77929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:02.575462   77929 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.206
	I0422 18:25:02.575507   77929 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:02.575522   77929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:02.575606   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.628012   77929 cri.go:89] found id: ""
	I0422 18:25:02.628080   77929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:02.645405   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:02.656723   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:02.656751   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:02.656814   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:25:02.667202   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:02.667269   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:02.678303   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:25:02.688600   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:02.688690   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:02.699963   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.710329   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:02.710393   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.721188   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:25:02.731964   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:02.732040   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:02.743541   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:02.755030   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:02.870301   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:03.995375   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125032803s)
	I0422 18:25:03.995447   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.230252   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.302979   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.395038   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:04.395115   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:59.996437   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.996984   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.997018   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.996926   79197 retry.go:31] will retry after 1.191182438s: waiting for machine to come up
	I0422 18:25:01.190382   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:01.190954   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:01.190990   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:01.190917   79197 retry.go:31] will retry after 1.312288818s: waiting for machine to come up
	I0422 18:25:02.504320   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:02.504783   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:02.504807   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:02.504744   79197 retry.go:31] will retry after 1.553447941s: waiting for machine to come up
	I0422 18:25:04.060300   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:04.060822   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:04.060855   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:04.060778   79197 retry.go:31] will retry after 1.790234912s: waiting for machine to come up
	I0422 18:25:03.502023   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.353882   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:04.353905   77634 pod_ready.go:81] duration metric: took 14.00823208s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:04.353915   77634 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:06.363356   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:08.363954   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.896176   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.396048   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.440071   77929 api_server.go:72] duration metric: took 1.045032787s to wait for apiserver process to appear ...
	I0422 18:25:05.440103   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:25:05.440148   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.759542   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.759577   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.759592   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.793255   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.793294   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.940652   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.945611   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:08.945646   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:09.440292   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.464743   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.464770   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:05.852898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:05.853386   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:05.853413   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:05.853350   79197 retry.go:31] will retry after 2.265221688s: waiting for machine to come up
	I0422 18:25:08.121376   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:08.121797   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:08.121835   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:08.121747   79197 retry.go:31] will retry after 3.098868652s: waiting for machine to come up
	I0422 18:25:09.940470   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.946872   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.946900   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:10.441291   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:10.445834   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:25:10.452788   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:25:10.452814   77929 api_server.go:131] duration metric: took 5.012704724s to wait for apiserver health ...
	I0422 18:25:10.452823   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:10.452828   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:10.454695   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:25:10.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:13.361234   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:10.456234   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:25:10.469460   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:25:10.510297   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:25:10.527988   77929 system_pods.go:59] 8 kube-system pods found
	I0422 18:25:10.528034   77929 system_pods.go:61] "coredns-7db6d8ff4d-w968m" [1372c3d4-cb23-4f33-911b-57876688fcd4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:25:10.528044   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [af6c3f45-494d-469b-95e0-3d0842d07a70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:25:10.528051   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [665925b4-3073-41c2-86c0-12186f079459] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:25:10.528057   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [e8661b67-89c5-43a6-b66e-828f637942e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:25:10.528061   77929 system_pods.go:61] "kube-proxy-4xvx2" [0e662ebe-1f6f-48fe-86c7-595b0bfa4bb6] Running
	I0422 18:25:10.528066   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [e6101593-2ee5-4765-b129-33b3ed7d4c98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:25:10.528075   77929 system_pods.go:61] "metrics-server-569cc877fc-l5qqw" [85eab808-f1f0-4fbc-9c54-1ae307226243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:25:10.528079   77929 system_pods.go:61] "storage-provisioner" [ba8465de-babc-4496-809f-68f6ec917ce8] Running
	I0422 18:25:10.528095   77929 system_pods.go:74] duration metric: took 17.768241ms to wait for pod list to return data ...
	I0422 18:25:10.528104   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:25:10.539169   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:25:10.539202   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:25:10.539214   77929 node_conditions.go:105] duration metric: took 11.105847ms to run NodePressure ...
	I0422 18:25:10.539237   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:10.808687   77929 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:25:10.815993   77929 kubeadm.go:733] kubelet initialised
	I0422 18:25:10.816025   77929 kubeadm.go:734] duration metric: took 7.302574ms waiting for restarted kubelet to initialise ...
	I0422 18:25:10.816037   77929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:25:10.824257   77929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:12.837255   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:11.221887   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:11.222319   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:11.222358   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:11.222277   79197 retry.go:31] will retry after 4.068460973s: waiting for machine to come up
	I0422 18:25:16.704684   77400 start.go:364] duration metric: took 53.475319353s to acquireMachinesLock for "no-preload-407991"
	I0422 18:25:16.704741   77400 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:25:16.704752   77400 fix.go:54] fixHost starting: 
	I0422 18:25:16.705132   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:25:16.705166   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:25:16.721711   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0422 18:25:16.722127   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:25:16.722671   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:25:16.722693   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:25:16.723022   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:25:16.723220   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:16.723426   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:25:16.725197   77400 fix.go:112] recreateIfNeeded on no-preload-407991: state=Stopped err=<nil>
	I0422 18:25:16.725231   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	W0422 18:25:16.725430   77400 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:25:16.727275   77400 out.go:177] * Restarting existing kvm2 VM for "no-preload-407991" ...
	I0422 18:25:15.295463   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296039   78377 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:25:15.296072   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296081   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:25:15.296472   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.296493   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
	I0422 18:25:15.296508   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | skip adding static IP to network mk-old-k8s-version-367072 - found existing host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"}
	I0422 18:25:15.296524   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:25:15.296537   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:25:15.299164   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299527   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.299562   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299661   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:25:15.299692   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:25:15.299731   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:15.299745   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:25:15.299762   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:25:15.431323   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:15.431669   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:25:15.432328   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.434829   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435261   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.435293   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435554   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:25:15.435765   78377 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:15.435786   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:15.436017   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.438390   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438750   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.438784   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438910   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.439095   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439314   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.439666   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.439849   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.439861   78377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:15.555657   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:15.555686   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.555931   78377 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:25:15.555962   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.556169   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.558789   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559254   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.559292   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559331   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.559492   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559641   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559748   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.559877   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.560055   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.560077   78377 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:25:15.690454   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:25:15.690486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.693309   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693654   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.693690   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693952   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.694172   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694390   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694546   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.694732   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.694940   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.694960   78377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:15.821039   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:15.821068   78377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:15.821096   78377 buildroot.go:174] setting up certificates
	I0422 18:25:15.821105   78377 provision.go:84] configureAuth start
	I0422 18:25:15.821113   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.821339   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.824209   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824673   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.824710   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824884   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.827439   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827725   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.827752   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827907   78377 provision.go:143] copyHostCerts
	I0422 18:25:15.827974   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:15.827987   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:15.828059   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:15.828170   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:15.828181   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:15.828209   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:15.828281   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:15.828291   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:15.828317   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:15.828411   78377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
	I0422 18:25:15.967003   78377 provision.go:177] copyRemoteCerts
	I0422 18:25:15.967056   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:15.967082   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.969759   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970152   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.970189   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970419   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.970600   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.970750   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.970903   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.058600   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:16.088368   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:25:16.119116   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:16.145380   78377 provision.go:87] duration metric: took 324.262342ms to configureAuth
	I0422 18:25:16.145416   78377 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:16.145651   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:25:16.145736   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.148776   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149221   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.149251   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149449   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.149624   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149789   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.150116   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.150295   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.150313   78377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:16.448112   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:16.448141   78377 machine.go:97] duration metric: took 1.012360153s to provisionDockerMachine
	I0422 18:25:16.448154   78377 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:25:16.448166   78377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:16.448188   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.448508   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:16.448541   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.451479   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.451874   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.451898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.452170   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.452373   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.452576   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.452773   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.543300   78377 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:16.549385   78377 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:16.549409   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:16.549473   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:16.549590   78377 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:16.549727   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:16.560863   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:16.585861   78377 start.go:296] duration metric: took 137.693932ms for postStartSetup
	I0422 18:25:16.585911   78377 fix.go:56] duration metric: took 20.77354305s for fixHost
	I0422 18:25:16.585931   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.588815   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589234   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.589263   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589495   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.589713   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.589877   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.590039   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.590245   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.590396   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.590406   78377 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:16.704537   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810316.682617297
	
	I0422 18:25:16.704559   78377 fix.go:216] guest clock: 1713810316.682617297
	I0422 18:25:16.704569   78377 fix.go:229] Guest: 2024-04-22 18:25:16.682617297 +0000 UTC Remote: 2024-04-22 18:25:16.585915688 +0000 UTC m=+211.981005523 (delta=96.701609ms)
	I0422 18:25:16.704592   78377 fix.go:200] guest clock delta is within tolerance: 96.701609ms
	I0422 18:25:16.704600   78377 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 20.892277591s
	I0422 18:25:16.704631   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.704920   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:16.707837   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708205   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.708230   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708427   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.708994   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709163   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709240   78377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:16.709279   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.709342   78377 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:16.709364   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.712025   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712216   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712450   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712498   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712566   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.712674   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712720   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712722   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.712857   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.712945   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.713038   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.713101   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.713240   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.713370   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.804499   78377 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:16.836596   78377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:16.993049   78377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:17.000275   78377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:17.000346   78377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:17.023327   78377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:17.023351   78377 start.go:494] detecting cgroup driver to use...
	I0422 18:25:17.023425   78377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:17.045320   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:17.061622   78377 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:17.061692   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:17.078768   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:17.094562   78377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:17.221702   78377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:17.390374   78377 docker.go:233] disabling docker service ...
	I0422 18:25:17.390449   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:17.409352   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:17.425491   78377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:17.582359   78377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:17.735691   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:17.752812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:17.777437   78377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:25:17.777495   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.789378   78377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:17.789441   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.801159   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.813702   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.825938   78377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:17.841552   78377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:17.852365   78377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:17.852455   78377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:17.870233   78377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:17.882139   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:18.021505   78377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:18.179583   78377 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:18.179677   78377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:18.185047   78377 start.go:562] Will wait 60s for crictl version
	I0422 18:25:18.185105   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:18.189079   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:18.227533   78377 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:18.227643   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.260147   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.297011   78377 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:25:15.362667   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:17.861622   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:14.831683   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:14.831706   77929 pod_ready.go:81] duration metric: took 4.007420508s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:14.831715   77929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343025   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:16.343056   77929 pod_ready.go:81] duration metric: took 1.511333532s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343070   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351244   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:17.351267   77929 pod_ready.go:81] duration metric: took 1.008189798s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351280   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:19.365025   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:18.298407   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:18.301613   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302026   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:18.302057   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302317   78377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:18.307249   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:18.321575   78377 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:18.321721   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:25:18.321767   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:18.382066   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:18.382133   78377 ssh_runner.go:195] Run: which lz4
	I0422 18:25:18.387080   78377 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:25:18.392576   78377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:25:18.392613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:25:16.728745   77400 main.go:141] libmachine: (no-preload-407991) Calling .Start
	I0422 18:25:16.728946   77400 main.go:141] libmachine: (no-preload-407991) Ensuring networks are active...
	I0422 18:25:16.729604   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network default is active
	I0422 18:25:16.729979   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network mk-no-preload-407991 is active
	I0422 18:25:16.730458   77400 main.go:141] libmachine: (no-preload-407991) Getting domain xml...
	I0422 18:25:16.731314   77400 main.go:141] libmachine: (no-preload-407991) Creating domain...
	I0422 18:25:18.079763   77400 main.go:141] libmachine: (no-preload-407991) Waiting to get IP...
	I0422 18:25:18.080862   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.081371   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.081401   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.081340   79353 retry.go:31] will retry after 226.494122ms: waiting for machine to come up
	I0422 18:25:18.309499   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.309914   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.310019   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.309900   79353 retry.go:31] will retry after 375.374338ms: waiting for machine to come up
	I0422 18:25:18.686507   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.687064   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.687093   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.687018   79353 retry.go:31] will retry after 341.714326ms: waiting for machine to come up
	I0422 18:25:19.030772   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.031261   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.031290   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.031229   79353 retry.go:31] will retry after 388.101939ms: waiting for machine to come up
	I0422 18:25:19.420994   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.421478   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.421500   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.421397   79353 retry.go:31] will retry after 732.485222ms: waiting for machine to come up
	I0422 18:25:20.155887   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:20.156717   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:20.156750   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:20.156665   79353 retry.go:31] will retry after 950.207106ms: waiting for machine to come up
	I0422 18:25:19.878966   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.364111   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:21.859384   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.362519   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.362552   77929 pod_ready.go:81] duration metric: took 5.011264858s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.362566   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371087   77929 pod_ready.go:92] pod "kube-proxy-4xvx2" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.371112   77929 pod_ready.go:81] duration metric: took 8.534129ms for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371142   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376156   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.376183   77929 pod_ready.go:81] duration metric: took 5.03143ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376196   77929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:24.385435   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:20.319994   78377 crio.go:462] duration metric: took 1.932984536s to copy over tarball
	I0422 18:25:20.320076   78377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:25:23.622384   78377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.30227916s)
	I0422 18:25:23.622411   78377 crio.go:469] duration metric: took 3.302385661s to extract the tarball
	I0422 18:25:23.622419   78377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:23.678794   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:23.720105   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:23.720138   78377 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:23.720191   78377 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.720221   78377 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.720264   78377 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.720285   78377 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:25:23.720310   78377 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.720396   78377 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.720464   78377 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.720244   78377 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721865   78377 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.721895   78377 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.721911   78377 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721925   78377 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.721986   78377 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.722013   78377 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.722040   78377 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.722415   78377 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:25:23.947080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:25:23.956532   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.969401   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.975080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.977902   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.987657   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.091349   78377 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:25:24.091415   78377 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:25:24.091473   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091508   78377 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:25:24.091564   78377 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.091612   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091773   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.112708   78377 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:25:24.112758   78377 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.112807   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.156371   78377 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:25:24.156420   78377 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.156476   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209420   78377 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:25:24.209468   78377 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.209467   78377 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:25:24.209504   78377 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.209519   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209533   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209580   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:25:24.209613   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.209666   78377 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:25:24.209697   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.209700   78377 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.209721   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.209750   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.319159   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:25:24.319265   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:25:24.319294   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:25:24.319374   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:25:24.319453   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.319532   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.319575   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.406665   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:25:24.406699   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:25:24.406776   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:25:24.581672   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:21.108444   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:21.109056   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:21.109082   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:21.109004   79353 retry.go:31] will retry after 958.250136ms: waiting for machine to come up
	I0422 18:25:22.069541   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:22.070120   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:22.070144   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:22.070036   79353 retry.go:31] will retry after 989.607679ms: waiting for machine to come up
	I0422 18:25:23.061351   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:23.061877   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:23.061908   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:23.061823   79353 retry.go:31] will retry after 1.451989455s: waiting for machine to come up
	I0422 18:25:24.515233   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:24.515730   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:24.515755   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:24.515686   79353 retry.go:31] will retry after 2.303903602s: waiting for machine to come up
	I0422 18:25:24.365508   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.861066   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.389132   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:28.883625   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:24.724445   78377 cache_images.go:92] duration metric: took 1.004285991s to LoadCachedImages
	W0422 18:25:24.894312   78377 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0422 18:25:24.894361   78377 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:25:24.894488   78377 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:24.894582   78377 ssh_runner.go:195] Run: crio config
	I0422 18:25:24.951231   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:25:24.951266   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:24.951282   78377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:24.951305   78377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:25:24.951495   78377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:24.951570   78377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:25:24.964466   78377 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:24.964547   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:24.976092   78377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:25:24.995716   78377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:25.014159   78377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0422 18:25:25.036255   78377 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:25.040649   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:25.055323   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:25.186492   78377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:25.208819   78377 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:25:25.208862   78377 certs.go:194] generating shared ca certs ...
	I0422 18:25:25.208882   78377 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.209089   78377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:25.209144   78377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:25.209155   78377 certs.go:256] generating profile certs ...
	I0422 18:25:25.209307   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:25:25.209376   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:25:25.209438   78377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:25:25.209584   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:25.209623   78377 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:25.209632   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:25.209664   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:25.209701   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:25.209738   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:25.209791   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:25.210613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:25.262071   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:25.298556   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:25.331614   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:25.368285   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:25:25.403290   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:25.441081   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:25.487498   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:25:25.522482   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:25.549945   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:25.578991   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:25.608935   78377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:25.629179   78377 ssh_runner.go:195] Run: openssl version
	I0422 18:25:25.636149   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:25.648693   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653465   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653534   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.659701   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:25.671984   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:25.684361   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689344   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689410   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.695648   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:25.708266   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:25.721991   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726808   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726872   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.732974   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:25.749380   78377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:25.754517   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:25.761538   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:25.768472   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:25.775728   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:25.782337   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:25.788885   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:25:25.795677   78377 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:25.795771   78377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:25.795839   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.837381   78377 cri.go:89] found id: ""
	I0422 18:25:25.837437   78377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:25.848554   78377 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:25.848574   78377 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:25.848579   78377 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:25.848625   78377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:25.860204   78377 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:25.861212   78377 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:25:25.861884   78377 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-367072" cluster setting kubeconfig missing "old-k8s-version-367072" context setting]
	I0422 18:25:25.862851   78377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.864562   78377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:25.875151   78377 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.149
	I0422 18:25:25.875182   78377 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:25.875193   78377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:25.875255   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.915872   78377 cri.go:89] found id: ""
	I0422 18:25:25.915982   78377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:25.934776   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:25.946299   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:25.946326   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:25.946378   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:25:25.957495   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:25.957578   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:25.968843   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:25:25.981829   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:25.981909   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:25.995318   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.009567   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:26.009630   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.024306   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:25:26.036008   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:26.036075   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:26.046594   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:26.057056   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:26.207676   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.085460   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.324735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.431848   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.541157   78377 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:27.541254   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.042131   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.542270   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.041887   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.542069   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:26.821539   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:26.822006   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:26.822033   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:26.821950   79353 retry.go:31] will retry after 1.870697225s: waiting for machine to come up
	I0422 18:25:28.695072   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:28.695420   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:28.695466   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:28.695386   79353 retry.go:31] will retry after 2.327485176s: waiting for machine to come up
	I0422 18:25:28.861976   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:31.361339   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.883801   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:33.389422   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.041985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.541653   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.041304   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.542040   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.042024   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.541622   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.041428   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.541675   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.041841   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.541705   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.024382   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:31.024817   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:31.024845   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:31.024786   79353 retry.go:31] will retry after 2.767538103s: waiting for machine to come up
	I0422 18:25:33.794390   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:33.794834   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:33.794872   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:33.794808   79353 retry.go:31] will retry after 5.661373675s: waiting for machine to come up
	I0422 18:25:33.860276   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.861770   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:38.361316   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.883098   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:37.883749   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.041898   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.541499   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.041443   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.542150   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.042296   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.542002   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.041367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.541518   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.041471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.542025   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.457864   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458407   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has current primary IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458447   77400 main.go:141] libmachine: (no-preload-407991) Found IP for machine: 192.168.39.164
	I0422 18:25:39.458492   77400 main.go:141] libmachine: (no-preload-407991) Reserving static IP address...
	I0422 18:25:39.458954   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.458980   77400 main.go:141] libmachine: (no-preload-407991) DBG | skip adding static IP to network mk-no-preload-407991 - found existing host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"}
	I0422 18:25:39.458992   77400 main.go:141] libmachine: (no-preload-407991) Reserved static IP address: 192.168.39.164
	I0422 18:25:39.459012   77400 main.go:141] libmachine: (no-preload-407991) Waiting for SSH to be available...
	I0422 18:25:39.459027   77400 main.go:141] libmachine: (no-preload-407991) DBG | Getting to WaitForSSH function...
	I0422 18:25:39.461404   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461715   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.461746   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461875   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH client type: external
	I0422 18:25:39.461906   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa (-rw-------)
	I0422 18:25:39.461956   77400 main.go:141] libmachine: (no-preload-407991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:39.461974   77400 main.go:141] libmachine: (no-preload-407991) DBG | About to run SSH command:
	I0422 18:25:39.461992   77400 main.go:141] libmachine: (no-preload-407991) DBG | exit 0
	I0422 18:25:39.591446   77400 main.go:141] libmachine: (no-preload-407991) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:39.591795   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetConfigRaw
	I0422 18:25:39.592473   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.594928   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595379   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.595414   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595632   77400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/config.json ...
	I0422 18:25:39.595890   77400 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:39.595914   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:39.596103   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.598532   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.598899   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.598929   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.599071   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.599270   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599450   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599592   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.599728   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.599927   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.599942   77400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:39.712043   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:39.712081   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712336   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:25:39.712363   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712548   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.715474   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.715936   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.715960   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.716089   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.716265   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716396   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716530   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.716656   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.716860   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.716874   77400 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407991 && echo "no-preload-407991" | sudo tee /etc/hostname
	I0422 18:25:39.845921   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407991
	
	I0422 18:25:39.845959   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.848790   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849093   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.849121   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849288   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.849495   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849638   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849817   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.850014   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.850183   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.850200   77400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407991/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:39.977389   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:39.977427   77400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:39.977447   77400 buildroot.go:174] setting up certificates
	I0422 18:25:39.977456   77400 provision.go:84] configureAuth start
	I0422 18:25:39.977468   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.977754   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.980800   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981266   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.981305   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981458   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.984031   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984478   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.984510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984654   77400 provision.go:143] copyHostCerts
	I0422 18:25:39.984713   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:39.984725   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:39.984788   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:39.984907   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:39.984918   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:39.984952   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:39.985038   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:39.985048   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:39.985076   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:39.985158   77400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.no-preload-407991 san=[127.0.0.1 192.168.39.164 localhost minikube no-preload-407991]
	I0422 18:25:40.224235   77400 provision.go:177] copyRemoteCerts
	I0422 18:25:40.224306   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:40.224352   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.227355   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.227814   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.227842   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.228035   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.228232   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.228392   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.228560   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.318916   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:40.346168   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:40.371490   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:25:40.396866   77400 provision.go:87] duration metric: took 419.381117ms to configureAuth
	I0422 18:25:40.396899   77400 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:40.397067   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:25:40.397130   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.399642   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400060   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.400095   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400269   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.400466   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400652   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400832   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.401018   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.401176   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.401191   77400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:40.698107   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:40.698140   77400 machine.go:97] duration metric: took 1.102235221s to provisionDockerMachine
	I0422 18:25:40.698153   77400 start.go:293] postStartSetup for "no-preload-407991" (driver="kvm2")
	I0422 18:25:40.698171   77400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:40.698187   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.698497   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:40.698532   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.701545   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.701933   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.701964   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.702070   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.702295   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.702492   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.702727   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.800538   77400 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:40.805027   77400 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:40.805060   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:40.805133   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:40.805216   77400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:40.805304   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:40.816872   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:40.843857   77400 start.go:296] duration metric: took 145.69044ms for postStartSetup
	I0422 18:25:40.843896   77400 fix.go:56] duration metric: took 24.13914409s for fixHost
	I0422 18:25:40.843914   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.846770   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847148   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.847184   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847391   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.847605   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847778   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847966   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.848199   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.848382   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.848396   77400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:40.964440   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810340.939149386
	
	I0422 18:25:40.964473   77400 fix.go:216] guest clock: 1713810340.939149386
	I0422 18:25:40.964483   77400 fix.go:229] Guest: 2024-04-22 18:25:40.939149386 +0000 UTC Remote: 2024-04-22 18:25:40.843899302 +0000 UTC m=+360.205454093 (delta=95.250084ms)
	I0422 18:25:40.964508   77400 fix.go:200] guest clock delta is within tolerance: 95.250084ms
	I0422 18:25:40.964513   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 24.259798286s
	I0422 18:25:40.964535   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.964813   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:40.967510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.967906   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.967932   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.968087   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968610   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968782   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968866   77400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:40.968910   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.969047   77400 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:40.969074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.971818   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972039   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972190   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972203   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972394   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972565   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972580   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972594   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972733   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972791   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.972875   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972948   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.973062   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.973206   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:41.092004   77400 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:41.098574   77400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:41.242800   77400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:41.250454   77400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:41.250521   77400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:41.267380   77400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:41.267408   77400 start.go:494] detecting cgroup driver to use...
	I0422 18:25:41.267478   77400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:41.284742   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:41.299527   77400 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:41.299596   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:41.314189   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:41.329444   77400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:41.456719   77400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:41.628305   77400 docker.go:233] disabling docker service ...
	I0422 18:25:41.628376   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:41.643226   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:41.657578   77400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:41.780449   77400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:41.898823   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:41.913578   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:41.933621   77400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:25:41.933679   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.944309   77400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:41.944382   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.955308   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.966445   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.977509   77400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:41.989479   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.001915   77400 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.020554   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.033225   77400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:42.044177   77400 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:42.044231   77400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:42.060403   77400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:42.071760   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:42.213747   77400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:42.361818   77400 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:42.361911   77400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:42.367211   77400 start.go:562] Will wait 60s for crictl version
	I0422 18:25:42.367265   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.371042   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:42.408686   77400 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:42.408773   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.438447   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.469117   77400 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:25:40.862849   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.361826   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:39.884361   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:41.885199   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.885865   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:40.041777   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.541411   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.041834   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.542328   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.042211   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.542008   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.041844   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.542121   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.041564   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.541344   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.470665   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:42.473467   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.473845   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:42.473871   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.474121   77400 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:42.478401   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:42.491034   77400 kubeadm.go:877] updating cluster {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:42.491163   77400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:25:42.491203   77400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:42.530418   77400 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:25:42.530443   77400 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.530585   77400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.530641   77400 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0422 18:25:42.530601   77400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.530609   77400 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.530622   77400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.530626   77400 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532108   77400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.532136   77400 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0422 18:25:42.532111   77400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.532113   77400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.532175   77400 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532197   77400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.532223   77400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.532506   77400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.735366   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.750777   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0422 18:25:42.758260   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.759633   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.763447   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.765416   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.803799   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.832904   77400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0422 18:25:42.832959   77400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.833021   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981471   77400 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0422 18:25:42.981528   77400 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.981553   77400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0422 18:25:42.981584   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981592   77400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.981635   77400 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0422 18:25:42.981663   77400 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.981687   77400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0422 18:25:42.981699   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981642   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981716   77400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.981770   77400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0422 18:25:42.981776   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981788   77400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.981820   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981846   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:43.021364   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0422 18:25:43.021416   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:43.021455   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.021460   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:43.021529   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:43.021534   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:43.021585   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:43.130300   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0422 18:25:43.130373   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0422 18:25:43.130408   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:43.130425   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0422 18:25:43.130455   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:43.130514   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:43.134769   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0422 18:25:43.134785   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0422 18:25:43.134797   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134839   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134853   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:43.134882   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0422 18:25:43.134959   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:43.142273   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0422 18:25:43.142486   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0422 18:25:43.142837   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0422 18:25:43.840108   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210614   77400 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.075740127s)
	I0422 18:25:45.210650   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0422 18:25:45.210655   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.075789371s)
	I0422 18:25:45.210676   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0422 18:25:45.210693   77400 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.075715404s)
	I0422 18:25:45.210699   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210706   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0422 18:25:45.210748   77400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.370610047s)
	I0422 18:25:45.210785   77400 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0422 18:25:45.210750   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210842   77400 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210969   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:45.363082   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:47.861802   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:46.383938   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:48.385209   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:45.042273   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.541576   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.041447   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.541920   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.042364   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.541813   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.042362   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.541320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.041845   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.542204   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.203063   77400 ssh_runner.go:235] Completed: which crictl: (2.992066474s)
	I0422 18:25:48.203106   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.992228832s)
	I0422 18:25:48.203143   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0422 18:25:48.203159   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:48.203171   77400 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:48.203210   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:49.863963   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:52.370507   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.883608   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:53.386229   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.042263   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.541538   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.042055   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.041479   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.542313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.041554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.541500   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.042153   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.541953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.419429   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.216195193s)
	I0422 18:25:52.419462   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0422 18:25:52.419474   77400 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.216288559s)
	I0422 18:25:52.419488   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419513   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0422 18:25:52.419537   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419581   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:52.424638   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0422 18:25:53.873720   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.454157304s)
	I0422 18:25:53.873750   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0422 18:25:53.873780   77400 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:53.873825   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:54.860810   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:56.864272   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.388103   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:57.887970   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.041393   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.541470   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.042188   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.541734   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.042041   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.541540   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.041682   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.542178   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.042125   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.542154   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.955181   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.081308071s)
	I0422 18:25:55.955210   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0422 18:25:55.955236   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:55.955300   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:58.218734   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.263410883s)
	I0422 18:25:58.218762   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0422 18:25:58.218792   77400 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:58.218843   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:59.071398   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0422 18:25:59.071443   77400 cache_images.go:123] Successfully loaded all cached images
	I0422 18:25:59.071450   77400 cache_images.go:92] duration metric: took 16.54097573s to LoadCachedImages
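The cache_images lines above show minikube transferring cached image tarballs to the guest and loading each one with "sudo podman load -i ..." before timing the whole step. Below is a minimal Go sketch of that loading loop, for reference only: the tarball paths are taken from the log, while the helper name and the simple sequential error handling are assumptions of the sketch, not minikube's actual cache_images code.

// Illustrative sketch: load pre-transferred image tarballs into the CRI-O
// image store with podman, mirroring the "sudo podman load -i ..." commands
// in the log above. Passwordless sudo on the guest is assumed.
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImages(tarballs []string) error {
	for _, tb := range tarballs {
		// Equivalent of: sudo podman load -i /var/lib/minikube/images/<name>
		cmd := exec.Command("sudo", "podman", "load", "-i", tb)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tb, err, out)
		}
	}
	return nil
}

func main() {
	images := []string{
		"/var/lib/minikube/images/kube-apiserver_v1.30.0",
		"/var/lib/minikube/images/etcd_3.5.12-0",
	}
	if err := loadCachedImages(images); err != nil {
		fmt.Println(err)
	}
}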
	I0422 18:25:59.071463   77400 kubeadm.go:928] updating node { 192.168.39.164 8443 v1.30.0 crio true true} ...
	I0422 18:25:59.071610   77400 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-407991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:59.071698   77400 ssh_runner.go:195] Run: crio config
	I0422 18:25:59.125757   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:25:59.125783   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:59.125800   77400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:59.125832   77400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-407991 NodeName:no-preload-407991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:59.126001   77400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-407991"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:59.126073   77400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:59.137254   77400 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:59.137320   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:59.146983   77400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0422 18:25:59.165207   77400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:59.182898   77400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0422 18:25:59.201735   77400 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:59.206108   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:59.219642   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:59.336565   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:59.356844   77400 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991 for IP: 192.168.39.164
	I0422 18:25:59.356873   77400 certs.go:194] generating shared ca certs ...
	I0422 18:25:59.356893   77400 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:59.357058   77400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:59.357121   77400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:59.357133   77400 certs.go:256] generating profile certs ...
	I0422 18:25:59.357209   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/client.key
	I0422 18:25:59.357329   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key.6aa1268b
	I0422 18:25:59.357413   77400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key
	I0422 18:25:59.357574   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:59.357616   77400 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:59.357631   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:59.357672   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:59.357707   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:59.357745   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:59.357823   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:59.358765   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:59.395982   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:59.430445   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:59.465415   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:59.502678   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 18:25:59.538225   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:25:59.570635   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:59.596096   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:59.622051   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:59.647372   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:59.673650   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:59.699515   77400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:59.717253   77400 ssh_runner.go:195] Run: openssl version
	I0422 18:25:59.723704   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:59.735265   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740264   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740319   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.746445   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:59.757879   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:59.769243   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774505   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774562   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.780572   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:59.793472   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:59.805187   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810148   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810191   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.816350   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:59.828208   77400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:59.832799   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:59.838952   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:59.845145   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:59.851309   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:59.857643   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:59.864892   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
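The "openssl x509 -noout -in <cert> -checkend 86400" runs above verify that each existing control-plane certificate is still valid for at least another 24 hours before it is reused. A small Go equivalent using crypto/x509 is sketched below, purely as an illustration: the certificate path is copied from the log, and the helper name is invented for the sketch.

// Illustrative sketch: report whether the first certificate in a PEM file
// expires within d, mirroring "openssl x509 -checkend 86400" from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// 24h matches the 86400-second window passed to openssl in the log.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}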
	I0422 18:25:59.873625   77400 kubeadm.go:391] StartCluster: {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:59.873749   77400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:59.873826   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.913578   77400 cri.go:89] found id: ""
	I0422 18:25:59.913656   77400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:59.925105   77400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:59.925131   77400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:59.925138   77400 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:59.925192   77400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:59.935942   77400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:59.937363   77400 kubeconfig.go:125] found "no-preload-407991" server: "https://192.168.39.164:8443"
	I0422 18:25:59.939672   77400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:59.949774   77400 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.164
	I0422 18:25:59.949810   77400 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:59.949841   77400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:59.949896   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.989385   77400 cri.go:89] found id: ""
	I0422 18:25:59.989443   77400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:26:00.005985   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:26:00.016873   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:26:00.016897   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:26:00.016953   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:26:00.027119   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:26:00.027205   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:26:00.038360   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:26:00.048176   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:26:00.048246   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:26:00.058861   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.068955   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:26:00.069018   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.079147   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:26:00.089400   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:26:00.089477   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:26:00.100245   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:26:00.111040   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:00.224436   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
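The "kubeadm init phase" invocations above (and the further phases that follow in this stream) regenerate certificates and kubeconfigs for the restarted control plane by shelling out to the bundled kubeadm binary through bash. Below is a minimal Go sketch of driving those phases in order; the binary and config paths are taken from the log, passwordless sudo on the guest is assumed, and the helper name is invented for the sketch.

// Illustrative sketch: run the same "kubeadm init phase ..." commands that
// appear in the log, in the order the restart path issues them.
package main

import (
	"fmt"
	"os/exec"
)

func runPhase(phase string) error {
	cmd := fmt.Sprintf(
		"sudo env PATH=\"/var/lib/minikube/binaries/v1.30.0:$PATH\" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml",
		phase)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("phase %q: %v\n%s", phase, err, out)
	}
	return nil
}

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		if err := runPhase(p); err != nil {
			fmt.Println(err)
			return
		}
	}
}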
	I0422 18:25:59.362215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:01.860196   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.388433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:02.883211   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.042114   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.542138   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.042285   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.542226   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.041310   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.541432   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.041406   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.542306   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.042010   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.541508   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.838456   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.057201   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.143346   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.294896   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:26:01.295031   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.795945   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.296085   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.324434   77400 api_server.go:72] duration metric: took 1.029539423s to wait for apiserver process to appear ...
	I0422 18:26:02.324467   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:26:02.324490   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.784948   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:26:04.784984   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:26:04.784997   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.844019   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.844064   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:04.844084   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.848805   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.848838   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.325458   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.332351   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.332410   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.824785   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.830293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.830318   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:06.325380   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:06.332804   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:26:06.344083   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:26:06.344110   77400 api_server.go:131] duration metric: took 4.019636154s to wait for apiserver health ...
	I0422 18:26:06.344118   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:26:06.344123   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:26:06.345875   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
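The api_server.go lines above poll https://192.168.39.164:8443/healthz roughly every 500ms, logging the 403/500 bodies until the endpoint finally answers 200. A stripped-down Go version of that polling loop follows, for illustration only; skipping TLS verification is an assumption of the sketch, since minikube itself reaches the endpoint with the cluster's own certificates.

// Illustrative sketch: wait for the apiserver /healthz endpoint to return 200,
// mirroring the health checks in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // equivalent of "healthz returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.164:8443/healthz", 4*time.Minute))
}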
	I0422 18:26:03.863020   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:06.360428   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:04.884648   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:07.382356   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:09.388391   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:05.041961   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.541723   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.041954   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.541963   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.041378   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.541879   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.041942   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.541357   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.041425   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.541474   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.347812   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:26:06.361087   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:26:06.385654   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:26:06.398331   77400 system_pods.go:59] 8 kube-system pods found
	I0422 18:26:06.398372   77400 system_pods.go:61] "coredns-7db6d8ff4d-2p2sr" [3f42ce46-e76d-4bc8-9dd5-463a08948e4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:26:06.398384   77400 system_pods.go:61] "etcd-no-preload-407991" [96ae7feb-802f-44a8-81fc-5ea5de12e73b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:26:06.398396   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [28010e33-49a1-4c6b-90f9-939ede3ed97e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:26:06.398404   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [1e7db029-2196-499f-bc88-d780d065f80c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:26:06.398415   77400 system_pods.go:61] "kube-proxy-767q4" [1c6d01b0-caf0-4d52-8da8-caad7b158012] Running
	I0422 18:26:06.398426   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [3ef8d145-d90e-455d-98fe-de9e6080a178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:26:06.398433   77400 system_pods.go:61] "metrics-server-569cc877fc-jmjhm" [d831b01b-af2e-4c7f-944c-e768d724ee5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:26:06.398439   77400 system_pods.go:61] "storage-provisioner" [db8196df-a394-4e10-9db7-c10414833af3] Running
	I0422 18:26:06.398447   77400 system_pods.go:74] duration metric: took 12.770066ms to wait for pod list to return data ...
	I0422 18:26:06.398455   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:26:06.402125   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:26:06.402158   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:26:06.402170   77400 node_conditions.go:105] duration metric: took 3.709194ms to run NodePressure ...
	I0422 18:26:06.402195   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:06.676133   77400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680247   77400 kubeadm.go:733] kubelet initialised
	I0422 18:26:06.680269   77400 kubeadm.go:734] duration metric: took 4.114413ms waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680276   77400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:26:06.687275   77400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.693967   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.693986   77400 pod_ready.go:81] duration metric: took 6.687466ms for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.694004   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.694012   77400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.698539   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698562   77400 pod_ready.go:81] duration metric: took 4.539271ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.698571   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698578   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.703382   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703407   77400 pod_ready.go:81] duration metric: took 4.822601ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.703418   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703425   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.789413   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789449   77400 pod_ready.go:81] duration metric: took 86.014056ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.789459   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789465   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189544   77400 pod_ready.go:92] pod "kube-proxy-767q4" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:07.189572   77400 pod_ready.go:81] duration metric: took 400.096716ms for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189585   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:09.201757   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
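The pod_ready.go lines above wait for each system pod's Ready condition to become True, retrying until a 4m0s deadline. A compact client-go sketch of the same check is shown below; the kubeconfig path and the 2-second poll interval are assumptions of the sketch, while the namespace and pod name are copied from the log.

// Illustrative sketch: poll a pod until its Ready condition is True, the same
// condition the pod_ready.go log lines above are waiting on.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // poll interval assumed for the sketch
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Kubeconfig path is a placeholder assumption, not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "kube-scheduler-no-preload-407991", 4*time.Minute))
}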
	I0422 18:26:08.861714   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.359820   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.362303   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.883726   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:14.382966   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:10.041640   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.541360   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.042045   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.542018   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.541590   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.042320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.542036   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.041303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.541575   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.697196   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.697458   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.861378   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:17.861808   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:16.385523   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:18.883000   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.042300   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.542084   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.541867   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.041409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.542019   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.042027   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.042237   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.541613   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.697079   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:15.697104   77400 pod_ready.go:81] duration metric: took 8.507511233s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:15.697116   77400 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:17.704095   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.204276   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.360946   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:22.861202   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.883107   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:23.383119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.042039   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.541667   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.041765   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.542383   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.042213   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.541317   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.042164   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.541367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.042303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.541416   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.204697   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.703926   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.861797   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.361089   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.384161   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.386172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.042321   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.541554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.041583   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.542179   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.041877   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.541400   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:27.541473   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:27.585381   78377 cri.go:89] found id: ""
	I0422 18:26:27.585411   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.585424   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:27.585431   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:27.585503   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:27.622536   78377 cri.go:89] found id: ""
	I0422 18:26:27.622568   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.622578   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:27.622584   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:27.622645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:27.665233   78377 cri.go:89] found id: ""
	I0422 18:26:27.665264   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.665272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:27.665278   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:27.665356   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:27.703600   78377 cri.go:89] found id: ""
	I0422 18:26:27.703629   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.703640   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:27.703647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:27.703706   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:27.741412   78377 cri.go:89] found id: ""
	I0422 18:26:27.741441   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.741451   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:27.741459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:27.741520   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:27.783184   78377 cri.go:89] found id: ""
	I0422 18:26:27.783211   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.783218   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:27.783224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:27.783290   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:27.825404   78377 cri.go:89] found id: ""
	I0422 18:26:27.825433   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.825443   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:27.825450   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:27.825513   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:27.862052   78377 cri.go:89] found id: ""
	I0422 18:26:27.862076   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.862086   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:27.862096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:27.862109   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:27.914533   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:27.914564   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:27.929474   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:27.929502   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:28.054566   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:28.054595   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:28.054612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:28.119416   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:28.119451   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:27.204128   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.207057   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.364913   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.883085   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.883536   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.883927   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:30.667642   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:30.680870   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:30.680930   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:30.719832   78377 cri.go:89] found id: ""
	I0422 18:26:30.719863   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.719874   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:30.719881   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:30.719940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:30.756168   78377 cri.go:89] found id: ""
	I0422 18:26:30.756195   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.756206   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:30.756213   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:30.756267   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:30.792940   78377 cri.go:89] found id: ""
	I0422 18:26:30.792963   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.792971   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:30.792976   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:30.793021   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:30.827452   78377 cri.go:89] found id: ""
	I0422 18:26:30.827480   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.827490   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:30.827497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:30.827563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:30.868058   78377 cri.go:89] found id: ""
	I0422 18:26:30.868088   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.868099   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:30.868107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:30.868170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:30.908639   78377 cri.go:89] found id: ""
	I0422 18:26:30.908672   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.908680   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:30.908686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:30.908735   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:30.959048   78377 cri.go:89] found id: ""
	I0422 18:26:30.959073   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.959080   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:30.959085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:30.959153   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:30.998779   78377 cri.go:89] found id: ""
	I0422 18:26:30.998809   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.998821   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:30.998856   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:30.998875   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:31.053763   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:31.053804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:31.069522   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:31.069558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:31.147512   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:31.147541   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:31.147556   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:31.222713   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:31.222752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:33.765573   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:33.781038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:33.781116   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:33.822148   78377 cri.go:89] found id: ""
	I0422 18:26:33.822175   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.822182   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:33.822187   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:33.822282   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:33.862524   78377 cri.go:89] found id: ""
	I0422 18:26:33.862553   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.862559   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:33.862565   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:33.862626   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:33.905952   78377 cri.go:89] found id: ""
	I0422 18:26:33.905980   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.905991   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:33.905999   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:33.906059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:33.943184   78377 cri.go:89] found id: ""
	I0422 18:26:33.943212   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.943220   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:33.943227   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:33.943285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:33.981677   78377 cri.go:89] found id: ""
	I0422 18:26:33.981712   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.981723   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:33.981731   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:33.981790   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:34.025999   78377 cri.go:89] found id: ""
	I0422 18:26:34.026026   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.026035   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:34.026042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:34.026102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:34.062940   78377 cri.go:89] found id: ""
	I0422 18:26:34.062967   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.062977   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:34.062985   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:34.063044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:34.103112   78377 cri.go:89] found id: ""
	I0422 18:26:34.103153   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.103164   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:34.103175   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:34.103189   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:34.156907   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:34.156944   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:34.171581   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:34.171608   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:34.252755   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:34.252784   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:34.252799   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:34.334118   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:34.334155   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:31.704123   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:34.206443   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.863261   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.360525   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.361132   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.385507   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.882649   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.882905   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:36.897949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:36.898026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:36.934776   78377 cri.go:89] found id: ""
	I0422 18:26:36.934801   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.934808   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:36.934814   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:36.934870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:36.974432   78377 cri.go:89] found id: ""
	I0422 18:26:36.974459   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.974467   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:36.974472   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:36.974519   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:37.011460   78377 cri.go:89] found id: ""
	I0422 18:26:37.011485   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.011496   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:37.011503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:37.011583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:37.056559   78377 cri.go:89] found id: ""
	I0422 18:26:37.056592   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.056604   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:37.056611   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:37.056670   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:37.095328   78377 cri.go:89] found id: ""
	I0422 18:26:37.095359   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.095371   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:37.095379   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:37.095460   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:37.132056   78377 cri.go:89] found id: ""
	I0422 18:26:37.132084   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.132095   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:37.132101   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:37.132162   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:37.168957   78377 cri.go:89] found id: ""
	I0422 18:26:37.168987   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.168998   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:37.169005   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:37.169072   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:37.207501   78377 cri.go:89] found id: ""
	I0422 18:26:37.207533   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.207544   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:37.207553   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:37.207567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:37.289851   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:37.289890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:37.351454   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:37.351481   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:37.409901   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:37.409938   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:37.425203   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:37.425234   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:37.508518   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:36.704473   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:39.204839   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.863837   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.362000   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.887004   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.384351   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.008934   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:40.023037   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:40.023096   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:40.066750   78377 cri.go:89] found id: ""
	I0422 18:26:40.066791   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.066811   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:40.066818   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:40.066889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:40.106562   78377 cri.go:89] found id: ""
	I0422 18:26:40.106584   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.106592   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:40.106598   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:40.106644   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:40.145265   78377 cri.go:89] found id: ""
	I0422 18:26:40.145300   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.145311   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:40.145319   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:40.145385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:40.182667   78377 cri.go:89] found id: ""
	I0422 18:26:40.182696   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.182707   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:40.182714   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:40.182772   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:40.227084   78377 cri.go:89] found id: ""
	I0422 18:26:40.227114   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.227139   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:40.227148   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:40.227203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:40.264298   78377 cri.go:89] found id: ""
	I0422 18:26:40.264326   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.264333   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:40.264339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:40.264404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:40.302071   78377 cri.go:89] found id: ""
	I0422 18:26:40.302103   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.302113   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:40.302121   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:40.302191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:40.340031   78377 cri.go:89] found id: ""
	I0422 18:26:40.340072   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.340083   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:40.340094   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:40.340108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:40.386371   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:40.386402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:40.438805   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:40.438884   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:40.455199   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:40.455240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:40.535984   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:40.536006   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:40.536024   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.125605   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:43.139961   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:43.140033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:43.176588   78377 cri.go:89] found id: ""
	I0422 18:26:43.176615   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.176625   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:43.176632   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:43.176695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:43.215868   78377 cri.go:89] found id: ""
	I0422 18:26:43.215900   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.215921   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:43.215929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:43.215991   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:43.253562   78377 cri.go:89] found id: ""
	I0422 18:26:43.253592   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.253603   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:43.253608   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:43.253652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:43.289305   78377 cri.go:89] found id: ""
	I0422 18:26:43.289335   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.289346   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:43.289353   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:43.289417   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:43.329241   78377 cri.go:89] found id: ""
	I0422 18:26:43.329286   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.329295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:43.329300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:43.329351   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:43.369682   78377 cri.go:89] found id: ""
	I0422 18:26:43.369700   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.369707   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:43.369713   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:43.369764   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:43.411788   78377 cri.go:89] found id: ""
	I0422 18:26:43.411812   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.411821   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:43.411829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:43.411911   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:43.447351   78377 cri.go:89] found id: ""
	I0422 18:26:43.447387   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.447398   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:43.447407   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:43.447418   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:43.520087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:43.520114   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:43.520125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.602199   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:43.602233   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:43.645723   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:43.645748   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:43.702769   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:43.702804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:41.704418   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.704878   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.362073   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.860279   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.385285   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.882420   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:46.229598   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:46.243348   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:46.243418   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:46.282470   78377 cri.go:89] found id: ""
	I0422 18:26:46.282500   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.282512   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:46.282519   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:46.282584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:46.327718   78377 cri.go:89] found id: ""
	I0422 18:26:46.327747   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.327755   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:46.327761   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:46.327829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:46.369785   78377 cri.go:89] found id: ""
	I0422 18:26:46.369807   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.369814   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:46.369820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:46.369867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:46.408132   78377 cri.go:89] found id: ""
	I0422 18:26:46.408161   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.408170   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:46.408175   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:46.408236   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:46.450058   78377 cri.go:89] found id: ""
	I0422 18:26:46.450084   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.450091   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:46.450096   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:46.450144   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:46.493747   78377 cri.go:89] found id: ""
	I0422 18:26:46.493776   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.493788   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:46.493794   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:46.493847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:46.529054   78377 cri.go:89] found id: ""
	I0422 18:26:46.529090   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.529102   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:46.529122   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:46.529186   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:46.566699   78377 cri.go:89] found id: ""
	I0422 18:26:46.566724   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.566732   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:46.566740   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:46.566752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.582569   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:46.582606   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:46.652188   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:46.652212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:46.652224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:46.732276   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:46.732316   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:46.789834   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:46.789862   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.343229   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:49.357513   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:49.357571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:49.396741   78377 cri.go:89] found id: ""
	I0422 18:26:49.396774   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.396785   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:49.396792   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:49.396862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:49.432048   78377 cri.go:89] found id: ""
	I0422 18:26:49.432081   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.432093   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:49.432100   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:49.432159   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:49.482104   78377 cri.go:89] found id: ""
	I0422 18:26:49.482130   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.482138   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:49.482145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:49.482202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:49.526782   78377 cri.go:89] found id: ""
	I0422 18:26:49.526811   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.526823   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:49.526830   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:49.526884   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:49.575436   78377 cri.go:89] found id: ""
	I0422 18:26:49.575471   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.575482   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:49.575490   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:49.575553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:49.628839   78377 cri.go:89] found id: ""
	I0422 18:26:49.628862   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.628870   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:49.628875   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:49.628940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:45.706474   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:48.205681   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.860748   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.360586   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.884553   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:51.885527   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.387502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.670046   78377 cri.go:89] found id: ""
	I0422 18:26:49.670074   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.670085   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:49.670091   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:49.670158   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:49.707083   78377 cri.go:89] found id: ""
	I0422 18:26:49.707109   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.707119   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:49.707144   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:49.707157   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.762794   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:49.762838   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:49.777771   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:49.777801   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:49.853426   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:49.853448   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:49.853463   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:49.934621   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:49.934659   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:52.481352   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:52.495956   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:52.496025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:52.539518   78377 cri.go:89] found id: ""
	I0422 18:26:52.539549   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.539559   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:52.539566   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:52.539627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:52.580604   78377 cri.go:89] found id: ""
	I0422 18:26:52.580632   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.580641   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:52.580646   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:52.580700   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:52.622746   78377 cri.go:89] found id: ""
	I0422 18:26:52.622775   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.622783   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:52.622795   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:52.622858   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:52.659557   78377 cri.go:89] found id: ""
	I0422 18:26:52.659579   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.659587   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:52.659592   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:52.659661   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:52.697653   78377 cri.go:89] found id: ""
	I0422 18:26:52.697678   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.697685   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:52.697691   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:52.697745   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:52.735505   78377 cri.go:89] found id: ""
	I0422 18:26:52.735536   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.735546   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:52.735554   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:52.735616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:52.774216   78377 cri.go:89] found id: ""
	I0422 18:26:52.774239   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.774247   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:52.774261   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:52.774318   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:52.812909   78377 cri.go:89] found id: ""
	I0422 18:26:52.812934   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.812941   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:52.812949   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:52.812981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:52.897636   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:52.897663   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:52.897679   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:52.985013   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:52.985046   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:53.031395   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:53.031427   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:53.088446   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:53.088480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:50.703624   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.704794   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.204187   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.861314   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:57.360430   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:56.882974   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:58.884770   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.603647   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:55.617977   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:55.618039   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:55.663769   78377 cri.go:89] found id: ""
	I0422 18:26:55.663797   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.663815   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:55.663822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:55.663925   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:55.701287   78377 cri.go:89] found id: ""
	I0422 18:26:55.701326   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.701338   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:55.701346   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:55.701435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:55.740041   78377 cri.go:89] found id: ""
	I0422 18:26:55.740067   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.740078   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:55.740107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:55.740163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:55.779093   78377 cri.go:89] found id: ""
	I0422 18:26:55.779143   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.779154   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:55.779170   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:55.779219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:55.822107   78377 cri.go:89] found id: ""
	I0422 18:26:55.822133   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.822141   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:55.822146   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:55.822195   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:55.862157   78377 cri.go:89] found id: ""
	I0422 18:26:55.862204   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.862215   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:55.862224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:55.862295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:55.902557   78377 cri.go:89] found id: ""
	I0422 18:26:55.902582   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.902595   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:55.902601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:55.902663   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:55.942185   78377 cri.go:89] found id: ""
	I0422 18:26:55.942215   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.942226   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:55.942237   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:55.942252   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.957050   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:55.957083   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:56.035015   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:56.035043   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:56.035058   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:56.125595   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:56.125636   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:56.169096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:56.169131   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:58.725079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:58.739736   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:58.739808   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:58.777724   78377 cri.go:89] found id: ""
	I0422 18:26:58.777752   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.777762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:58.777769   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:58.777828   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:58.814668   78377 cri.go:89] found id: ""
	I0422 18:26:58.814702   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.814713   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:58.814721   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:58.814791   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:58.852609   78377 cri.go:89] found id: ""
	I0422 18:26:58.852634   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.852648   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:58.852655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:58.852720   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:58.891881   78377 cri.go:89] found id: ""
	I0422 18:26:58.891904   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.891910   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:58.891936   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:58.891994   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:58.931663   78377 cri.go:89] found id: ""
	I0422 18:26:58.931690   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.931701   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:58.931708   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:58.931782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:58.967795   78377 cri.go:89] found id: ""
	I0422 18:26:58.967816   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.967823   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:58.967829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:58.967879   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:59.008898   78377 cri.go:89] found id: ""
	I0422 18:26:59.008932   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.008943   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:59.008950   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:59.009007   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:59.049230   78377 cri.go:89] found id: ""
	I0422 18:26:59.049267   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.049278   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:59.049288   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:59.049304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:59.104461   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:59.104508   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:59.119555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:59.119584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:59.195905   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:59.195952   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:59.195969   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:59.276319   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:59.276360   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:57.703613   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:00.205449   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:59.861376   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.862613   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.386313   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:03.883728   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.818221   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:01.833234   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:01.833294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:01.870997   78377 cri.go:89] found id: ""
	I0422 18:27:01.871022   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.871030   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:01.871036   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:01.871102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:01.910414   78377 cri.go:89] found id: ""
	I0422 18:27:01.910443   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.910453   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:01.910461   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:01.910526   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:01.949499   78377 cri.go:89] found id: ""
	I0422 18:27:01.949524   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.949532   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:01.949537   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:01.949598   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:01.987702   78377 cri.go:89] found id: ""
	I0422 18:27:01.987736   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.987747   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:01.987763   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:01.987836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:02.027193   78377 cri.go:89] found id: ""
	I0422 18:27:02.027222   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.027233   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:02.027240   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:02.027332   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:02.067537   78377 cri.go:89] found id: ""
	I0422 18:27:02.067564   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.067578   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:02.067584   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:02.067631   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:02.111085   78377 cri.go:89] found id: ""
	I0422 18:27:02.111112   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.111119   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:02.111140   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:02.111194   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:02.150730   78377 cri.go:89] found id: ""
	I0422 18:27:02.150760   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.150769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:02.150777   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:02.150789   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:02.230124   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:02.230150   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:02.230164   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:02.315337   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:02.315384   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:02.362022   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:02.362048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:02.421884   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:02.421924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:02.205610   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.704158   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.359865   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:06.359968   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.360935   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:05.884072   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.386493   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.937145   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:04.952303   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:04.952412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:04.995024   78377 cri.go:89] found id: ""
	I0422 18:27:04.995059   78377 logs.go:276] 0 containers: []
	W0422 18:27:04.995071   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:04.995079   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:04.995151   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:05.035094   78377 cri.go:89] found id: ""
	I0422 18:27:05.035129   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.035141   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:05.035148   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:05.035204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:05.074178   78377 cri.go:89] found id: ""
	I0422 18:27:05.074204   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.074215   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:05.074222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:05.074294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:05.115285   78377 cri.go:89] found id: ""
	I0422 18:27:05.115313   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.115324   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:05.115331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:05.115398   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:05.151000   78377 cri.go:89] found id: ""
	I0422 18:27:05.151032   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.151041   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:05.151047   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:05.151189   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:05.191627   78377 cri.go:89] found id: ""
	I0422 18:27:05.191651   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.191659   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:05.191664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:05.191710   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:05.232141   78377 cri.go:89] found id: ""
	I0422 18:27:05.232173   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.232183   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:05.232191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:05.232252   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:05.268498   78377 cri.go:89] found id: ""
	I0422 18:27:05.268523   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.268530   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:05.268537   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:05.268554   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:05.315909   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:05.315937   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:05.369623   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:05.369664   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:05.387343   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:05.387381   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:05.466087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:05.466106   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:05.466117   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:08.053578   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:08.067569   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:08.067627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:08.108274   78377 cri.go:89] found id: ""
	I0422 18:27:08.108307   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.108318   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:08.108325   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:08.108384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:08.155343   78377 cri.go:89] found id: ""
	I0422 18:27:08.155366   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.155373   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:08.155379   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:08.155435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:08.194636   78377 cri.go:89] found id: ""
	I0422 18:27:08.194661   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.194672   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:08.194677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:08.194724   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:08.232992   78377 cri.go:89] found id: ""
	I0422 18:27:08.233017   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.233024   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:08.233029   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:08.233076   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:08.271349   78377 cri.go:89] found id: ""
	I0422 18:27:08.271381   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.271391   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:08.271407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:08.271459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:08.311991   78377 cri.go:89] found id: ""
	I0422 18:27:08.312021   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.312033   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:08.312042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:08.312097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:08.353301   78377 cri.go:89] found id: ""
	I0422 18:27:08.353326   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.353333   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:08.353340   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:08.353399   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:08.391989   78377 cri.go:89] found id: ""
	I0422 18:27:08.392015   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.392025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:08.392035   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:08.392048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:08.437228   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:08.437260   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:08.489086   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:08.489121   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:08.503588   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:08.503616   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:08.583824   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:08.583845   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:08.583858   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:07.203802   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:09.204754   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.862854   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.361215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.883779   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:12.883989   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:11.164702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:11.178228   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:11.178293   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:11.217691   78377 cri.go:89] found id: ""
	I0422 18:27:11.217719   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.217729   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:11.217735   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:11.217796   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:11.253648   78377 cri.go:89] found id: ""
	I0422 18:27:11.253676   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.253685   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:11.253692   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:11.253753   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:11.290934   78377 cri.go:89] found id: ""
	I0422 18:27:11.290968   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.290979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:11.290988   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:11.291051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:11.331215   78377 cri.go:89] found id: ""
	I0422 18:27:11.331240   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.331249   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:11.331254   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:11.331344   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:11.371595   78377 cri.go:89] found id: ""
	I0422 18:27:11.371621   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.371629   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:11.371634   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:11.371697   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:11.413577   78377 cri.go:89] found id: ""
	I0422 18:27:11.413607   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.413616   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:11.413624   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:11.413684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:11.450669   78377 cri.go:89] found id: ""
	I0422 18:27:11.450700   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.450709   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:11.450717   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:11.450779   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:11.488096   78377 cri.go:89] found id: ""
	I0422 18:27:11.488122   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.488131   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:11.488142   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:11.488156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.540258   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:11.540299   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:11.555878   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:11.555922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:11.638190   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:11.638212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:11.638224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.719691   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:11.719726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:14.268811   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:14.283695   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:14.283749   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:14.323252   78377 cri.go:89] found id: ""
	I0422 18:27:14.323286   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.323299   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:14.323306   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:14.323370   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:14.362354   78377 cri.go:89] found id: ""
	I0422 18:27:14.362375   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.362382   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:14.362387   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:14.362450   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:14.405439   78377 cri.go:89] found id: ""
	I0422 18:27:14.405460   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.405467   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:14.405473   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:14.405531   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:14.445358   78377 cri.go:89] found id: ""
	I0422 18:27:14.445389   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.445399   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:14.445407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:14.445476   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:14.481933   78377 cri.go:89] found id: ""
	I0422 18:27:14.481961   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.481969   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:14.481974   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:14.482033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:14.526992   78377 cri.go:89] found id: ""
	I0422 18:27:14.527019   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.527028   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:14.527040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:14.527089   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:14.562197   78377 cri.go:89] found id: ""
	I0422 18:27:14.562221   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.562229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:14.562238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:14.562287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:14.599098   78377 cri.go:89] found id: ""
	I0422 18:27:14.599141   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.599153   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:14.599164   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:14.599177   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.205525   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.706785   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:15.861009   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.861214   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.884371   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.384911   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.655768   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:14.655800   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:14.670894   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:14.670929   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:14.759845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:14.759863   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:14.759874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:14.839715   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:14.839752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:17.384859   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:17.399664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:17.399741   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:17.439786   78377 cri.go:89] found id: ""
	I0422 18:27:17.439809   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.439817   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:17.439822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:17.439878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:17.476532   78377 cri.go:89] found id: ""
	I0422 18:27:17.476553   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.476561   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:17.476566   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:17.476623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:17.513464   78377 cri.go:89] found id: ""
	I0422 18:27:17.513488   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.513495   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:17.513500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:17.513546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:17.548793   78377 cri.go:89] found id: ""
	I0422 18:27:17.548821   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.548831   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:17.548838   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:17.548888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:17.584600   78377 cri.go:89] found id: ""
	I0422 18:27:17.584626   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.584636   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:17.584644   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:17.584705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:17.621574   78377 cri.go:89] found id: ""
	I0422 18:27:17.621603   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.621615   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:17.621622   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:17.621686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:17.663252   78377 cri.go:89] found id: ""
	I0422 18:27:17.663283   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.663290   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:17.663295   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:17.663352   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:17.702987   78377 cri.go:89] found id: ""
	I0422 18:27:17.703014   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.703025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:17.703035   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:17.703049   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:17.758182   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:17.758222   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:17.775796   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:17.775828   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:17.866450   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:17.866493   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:17.866507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:17.947651   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:17.947685   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:16.204000   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:18.704622   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.864836   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:22.360984   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.883393   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:21.885743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.384476   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:20.489441   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:20.502920   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:20.502987   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:20.540533   78377 cri.go:89] found id: ""
	I0422 18:27:20.540557   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.540565   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:20.540569   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:20.540612   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:20.578789   78377 cri.go:89] found id: ""
	I0422 18:27:20.578815   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.578824   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:20.578832   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:20.578900   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:20.613481   78377 cri.go:89] found id: ""
	I0422 18:27:20.613515   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.613525   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:20.613533   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:20.613597   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:20.650289   78377 cri.go:89] found id: ""
	I0422 18:27:20.650320   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.650331   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:20.650339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:20.650400   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:20.686259   78377 cri.go:89] found id: ""
	I0422 18:27:20.686288   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.686300   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:20.686306   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:20.686367   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:20.725983   78377 cri.go:89] found id: ""
	I0422 18:27:20.726011   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.726018   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:20.726024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:20.726092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:20.762193   78377 cri.go:89] found id: ""
	I0422 18:27:20.762220   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.762229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:20.762237   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:20.762295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:20.800738   78377 cri.go:89] found id: ""
	I0422 18:27:20.800761   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.800769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:20.800776   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:20.800787   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.842744   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:20.842771   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:20.896307   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:20.896337   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:20.911457   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:20.911485   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:20.985249   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:20.985277   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:20.985293   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:23.560513   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:23.585134   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:23.585214   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:23.624947   78377 cri.go:89] found id: ""
	I0422 18:27:23.624972   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.624980   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:23.624986   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:23.625051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:23.661886   78377 cri.go:89] found id: ""
	I0422 18:27:23.661915   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.661924   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:23.661929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:23.661997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:23.701061   78377 cri.go:89] found id: ""
	I0422 18:27:23.701087   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.701097   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:23.701104   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:23.701163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:23.742728   78377 cri.go:89] found id: ""
	I0422 18:27:23.742753   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.742760   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:23.742765   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:23.742813   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:23.786970   78377 cri.go:89] found id: ""
	I0422 18:27:23.787002   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.787011   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:23.787017   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:23.787070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:23.825253   78377 cri.go:89] found id: ""
	I0422 18:27:23.825282   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.825292   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:23.825300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:23.825357   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:23.865774   78377 cri.go:89] found id: ""
	I0422 18:27:23.865799   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.865807   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:23.865812   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:23.865860   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:23.903212   78377 cri.go:89] found id: ""
	I0422 18:27:23.903239   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.903247   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:23.903254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:23.903267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:23.958931   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:23.958968   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:23.973352   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:23.973383   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:24.053335   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:24.053356   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:24.053367   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:24.136491   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:24.136528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.704821   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:23.203548   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:25.204601   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.361665   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.361708   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.388979   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.882505   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
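	The interleaved pod_ready lines come from three other "minikube start" processes (PIDs 77400, 77634, 77929), each polling whether its cluster's metrics-server pod has reached the Ready condition. An equivalent one-off check with kubectl is sketched below; the pod name is taken from the log, while <profile> stands in for the context name, which is not shown here.

    # Prints "True" once the pod's Ready condition is met; empty or "False" otherwise.
    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-jmjhm \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'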
	I0422 18:27:26.679983   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:26.694521   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:26.694583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:26.733114   78377 cri.go:89] found id: ""
	I0422 18:27:26.733146   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.733156   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:26.733163   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:26.733221   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:26.776882   78377 cri.go:89] found id: ""
	I0422 18:27:26.776906   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.776913   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:26.776918   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:26.776966   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:26.822830   78377 cri.go:89] found id: ""
	I0422 18:27:26.822863   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.822874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:26.822882   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:26.822945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:26.868600   78377 cri.go:89] found id: ""
	I0422 18:27:26.868633   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.868641   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:26.868655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:26.868712   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:26.907547   78377 cri.go:89] found id: ""
	I0422 18:27:26.907570   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.907578   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:26.907583   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:26.907640   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:26.947594   78377 cri.go:89] found id: ""
	I0422 18:27:26.947635   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.947647   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:26.947656   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:26.947715   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:26.986732   78377 cri.go:89] found id: ""
	I0422 18:27:26.986761   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.986772   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:26.986780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:26.986838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:27.024338   78377 cri.go:89] found id: ""
	I0422 18:27:27.024370   78377 logs.go:276] 0 containers: []
	W0422 18:27:27.024378   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:27.024385   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:27.024396   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:27.077071   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:27.077112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:27.092664   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:27.092694   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:27.173056   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
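	The describe-nodes failure above is expected at this point: kubectl is pointed at the local apiserver endpoint (localhost:8443), and since no kube-apiserver container exists yet, the connection is refused. A quick way to confirm that nothing is listening, assuming shell access to the node and that ss and curl are available, is:

    # Both should report a refusal while the apiserver is down.
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "connection refused"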
	I0422 18:27:27.173081   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:27.173099   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:27.257836   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:27.257877   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:27.714190   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.204420   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.861728   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:31.360750   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.360969   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.883051   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.386563   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:29.800456   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:29.816085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:29.816150   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:29.858826   78377 cri.go:89] found id: ""
	I0422 18:27:29.858857   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.858878   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:29.858886   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:29.858956   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:29.900369   78377 cri.go:89] found id: ""
	I0422 18:27:29.900403   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.900417   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:29.900424   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:29.900490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:29.939766   78377 cri.go:89] found id: ""
	I0422 18:27:29.939801   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.939811   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:29.939818   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:29.939889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:29.986579   78377 cri.go:89] found id: ""
	I0422 18:27:29.986607   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.986617   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:29.986625   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:29.986685   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:30.030059   78377 cri.go:89] found id: ""
	I0422 18:27:30.030090   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.030102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:30.030110   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:30.030192   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:30.077543   78377 cri.go:89] found id: ""
	I0422 18:27:30.077573   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.077581   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:30.077586   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:30.077645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:30.123087   78377 cri.go:89] found id: ""
	I0422 18:27:30.123116   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.123137   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:30.123145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:30.123203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:30.160589   78377 cri.go:89] found id: ""
	I0422 18:27:30.160613   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.160621   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:30.160628   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:30.160639   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:30.213321   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:30.213352   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:30.228102   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:30.228129   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:30.303977   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:30.304013   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:30.304029   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:30.383817   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:30.383851   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:32.930619   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:32.943854   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:32.943914   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:32.984112   78377 cri.go:89] found id: ""
	I0422 18:27:32.984138   78377 logs.go:276] 0 containers: []
	W0422 18:27:32.984146   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:32.984151   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:32.984200   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:33.022243   78377 cri.go:89] found id: ""
	I0422 18:27:33.022283   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.022294   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:33.022301   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:33.022366   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:33.061177   78377 cri.go:89] found id: ""
	I0422 18:27:33.061205   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.061214   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:33.061222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:33.061281   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:33.104430   78377 cri.go:89] found id: ""
	I0422 18:27:33.104458   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.104466   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:33.104471   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:33.104528   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:33.140255   78377 cri.go:89] found id: ""
	I0422 18:27:33.140284   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.140295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:33.140302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:33.140362   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:33.179487   78377 cri.go:89] found id: ""
	I0422 18:27:33.179512   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.179519   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:33.179524   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:33.179576   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:33.217226   78377 cri.go:89] found id: ""
	I0422 18:27:33.217258   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.217265   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:33.217271   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:33.217319   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:33.257076   78377 cri.go:89] found id: ""
	I0422 18:27:33.257104   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.257114   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:33.257123   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:33.257137   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:33.271183   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:33.271211   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:33.344812   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:33.344843   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:33.344859   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:33.420605   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:33.420640   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:33.465779   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:33.465807   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:32.704424   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:34.705215   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.861184   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.361048   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.883602   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.383601   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:36.019062   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:36.039226   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:36.039305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:36.082940   78377 cri.go:89] found id: ""
	I0422 18:27:36.082978   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.082991   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:36.083000   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:36.083063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:36.120371   78377 cri.go:89] found id: ""
	I0422 18:27:36.120416   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.120428   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:36.120436   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:36.120496   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:36.158018   78377 cri.go:89] found id: ""
	I0422 18:27:36.158051   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.158063   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:36.158070   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:36.158131   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:36.196192   78377 cri.go:89] found id: ""
	I0422 18:27:36.196221   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.196231   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:36.196238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:36.196305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:36.237742   78377 cri.go:89] found id: ""
	I0422 18:27:36.237773   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.237784   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:36.237791   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:36.237852   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:36.277884   78377 cri.go:89] found id: ""
	I0422 18:27:36.277911   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.277918   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:36.277923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:36.277993   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:36.314897   78377 cri.go:89] found id: ""
	I0422 18:27:36.314929   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.314939   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:36.314947   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:36.315009   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:36.354806   78377 cri.go:89] found id: ""
	I0422 18:27:36.354833   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.354843   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:36.354851   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:36.354863   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.406941   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:36.406981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:36.423308   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:36.423344   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:36.507202   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:36.507223   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:36.507238   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:36.582489   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:36.582525   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:39.127409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:39.140820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:39.140895   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:39.182068   78377 cri.go:89] found id: ""
	I0422 18:27:39.182094   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.182105   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:39.182112   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:39.182169   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:39.222711   78377 cri.go:89] found id: ""
	I0422 18:27:39.222735   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.222751   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:39.222756   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:39.222827   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:39.263396   78377 cri.go:89] found id: ""
	I0422 18:27:39.263423   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.263432   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:39.263437   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:39.263490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:39.300559   78377 cri.go:89] found id: ""
	I0422 18:27:39.300589   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.300603   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:39.300610   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:39.300672   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:39.336486   78377 cri.go:89] found id: ""
	I0422 18:27:39.336521   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.336530   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:39.336536   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:39.336584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:39.373985   78377 cri.go:89] found id: ""
	I0422 18:27:39.374020   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.374030   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:39.374038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:39.374097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:39.412511   78377 cri.go:89] found id: ""
	I0422 18:27:39.412540   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.412547   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:39.412553   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:39.412616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:39.459197   78377 cri.go:89] found id: ""
	I0422 18:27:39.459233   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.459243   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:39.459254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:39.459269   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:39.514579   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:39.514623   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:39.530082   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:39.530107   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:39.603797   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:39.603830   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:39.603854   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:37.203082   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.204563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.860739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.861544   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.385271   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.389273   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.684853   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:39.684890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:42.227702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:42.243438   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:42.243499   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:42.290374   78377 cri.go:89] found id: ""
	I0422 18:27:42.290402   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.290413   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:42.290420   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:42.290481   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:42.332793   78377 cri.go:89] found id: ""
	I0422 18:27:42.332828   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.332840   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:42.332875   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:42.332937   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:42.375844   78377 cri.go:89] found id: ""
	I0422 18:27:42.375876   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.375884   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:42.375889   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:42.375945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:42.419725   78377 cri.go:89] found id: ""
	I0422 18:27:42.419758   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.419769   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:42.419777   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:42.419878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:42.453969   78377 cri.go:89] found id: ""
	I0422 18:27:42.454004   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.454014   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:42.454022   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:42.454080   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:42.489045   78377 cri.go:89] found id: ""
	I0422 18:27:42.489077   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.489087   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:42.489095   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:42.489157   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:42.529127   78377 cri.go:89] found id: ""
	I0422 18:27:42.529155   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.529166   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:42.529174   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:42.529229   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:42.566253   78377 cri.go:89] found id: ""
	I0422 18:27:42.566278   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.566286   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:42.566293   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:42.566307   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:42.622054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:42.622101   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:42.636278   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:42.636304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:42.712179   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:42.712203   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:42.712215   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:42.791885   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:42.791928   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:41.705615   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.203947   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.361656   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:47.860929   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.882684   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:46.886119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:49.382017   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.337091   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:45.353053   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:45.353133   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:45.393230   78377 cri.go:89] found id: ""
	I0422 18:27:45.393257   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.393267   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:45.393274   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:45.393330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:45.432183   78377 cri.go:89] found id: ""
	I0422 18:27:45.432210   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.432220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:45.432228   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:45.432285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:45.468114   78377 cri.go:89] found id: ""
	I0422 18:27:45.468147   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.468157   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:45.468169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:45.468233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:45.504793   78377 cri.go:89] found id: ""
	I0422 18:27:45.504817   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.504836   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:45.504841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:45.504889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:45.544822   78377 cri.go:89] found id: ""
	I0422 18:27:45.544851   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.544862   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:45.544868   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:45.544934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:45.588266   78377 cri.go:89] found id: ""
	I0422 18:27:45.588289   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.588322   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:45.588330   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:45.588391   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:45.625549   78377 cri.go:89] found id: ""
	I0422 18:27:45.625576   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.625583   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:45.625589   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:45.625639   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:45.663066   78377 cri.go:89] found id: ""
	I0422 18:27:45.663096   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.663104   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:45.663114   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:45.663143   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:45.715051   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:45.715082   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:45.729496   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:45.729523   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:45.801270   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:45.801296   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:45.801312   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:45.886530   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:45.886561   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:48.429822   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:48.444528   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:48.444610   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:48.483164   78377 cri.go:89] found id: ""
	I0422 18:27:48.483194   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.483204   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:48.483210   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:48.483257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:48.520295   78377 cri.go:89] found id: ""
	I0422 18:27:48.520321   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.520328   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:48.520333   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:48.520378   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:48.558839   78377 cri.go:89] found id: ""
	I0422 18:27:48.558866   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.558875   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:48.558881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:48.558939   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:48.599692   78377 cri.go:89] found id: ""
	I0422 18:27:48.599715   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.599722   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:48.599728   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:48.599773   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:48.638457   78377 cri.go:89] found id: ""
	I0422 18:27:48.638486   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.638494   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:48.638500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:48.638561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:48.677344   78377 cri.go:89] found id: ""
	I0422 18:27:48.677383   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.677395   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:48.677402   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:48.677466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:48.717129   78377 cri.go:89] found id: ""
	I0422 18:27:48.717155   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.717163   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:48.717169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:48.717219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:48.758256   78377 cri.go:89] found id: ""
	I0422 18:27:48.758281   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.758289   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:48.758297   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:48.758311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:48.810377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:48.810415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:48.824919   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:48.824949   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:48.908446   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:48.908473   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:48.908569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:48.984952   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:48.984991   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:46.703083   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:48.705413   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:50.361465   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:52.364509   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.384561   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.882657   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.527387   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:51.541482   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:51.541560   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.579020   78377 cri.go:89] found id: ""
	I0422 18:27:51.579098   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.579114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:51.579134   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:51.579204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:51.616430   78377 cri.go:89] found id: ""
	I0422 18:27:51.616456   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.616465   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:51.616470   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:51.616516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:51.654089   78377 cri.go:89] found id: ""
	I0422 18:27:51.654120   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.654131   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:51.654138   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:51.654201   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:51.693945   78377 cri.go:89] found id: ""
	I0422 18:27:51.693979   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.693993   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:51.694000   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:51.694068   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:51.732873   78377 cri.go:89] found id: ""
	I0422 18:27:51.732906   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.732917   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:51.732923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:51.732990   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:51.770772   78377 cri.go:89] found id: ""
	I0422 18:27:51.770794   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.770801   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:51.770807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:51.770862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:51.819370   78377 cri.go:89] found id: ""
	I0422 18:27:51.819397   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.819405   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:51.819411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:51.819459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:51.858001   78377 cri.go:89] found id: ""
	I0422 18:27:51.858033   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.858044   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:51.858055   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:51.858069   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:51.938531   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:51.938557   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:51.938571   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:52.014397   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:52.014435   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:52.059420   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:52.059458   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:52.119498   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:52.119534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:54.634238   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:54.649044   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:54.649119   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.203623   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.205834   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.863919   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.360796   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:56.383743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:58.383783   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.691846   78377 cri.go:89] found id: ""
	I0422 18:27:54.691879   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.691890   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:54.691907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:54.691970   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:54.731466   78377 cri.go:89] found id: ""
	I0422 18:27:54.731496   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.731507   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:54.731515   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:54.731588   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:54.776948   78377 cri.go:89] found id: ""
	I0422 18:27:54.776972   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.776979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:54.776984   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:54.777031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:54.815908   78377 cri.go:89] found id: ""
	I0422 18:27:54.815939   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.815946   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:54.815952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:54.815997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:54.856641   78377 cri.go:89] found id: ""
	I0422 18:27:54.856673   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.856684   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:54.856690   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:54.856757   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:54.896968   78377 cri.go:89] found id: ""
	I0422 18:27:54.896996   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.897004   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:54.897009   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:54.897073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:54.936353   78377 cri.go:89] found id: ""
	I0422 18:27:54.936388   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.936400   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:54.936407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:54.936468   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:54.976009   78377 cri.go:89] found id: ""
	I0422 18:27:54.976038   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.976048   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:54.976058   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:54.976071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:55.027890   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:55.027924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:55.041914   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:55.041939   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:55.112556   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.112583   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:55.112597   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:55.187688   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:55.187723   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
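The repeated blocks above come from minikube's log collector: for each expected control-plane component it runs `sudo crictl ps -a --quiet --name=<component>`, finds no container, and then falls back to gathering kubelet/journalctl, dmesg, `describe nodes`, CRI-O and container-status output. A minimal Go sketch of that probe loop follows; it is an illustration only, not minikube's actual cri.go/logs.go code, and it assumes crictl is on PATH and is run locally rather than over SSH as the real harness does.

// probe.go - simplified sketch of the crictl probe loop shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components the log collector looks for, taken from the log output above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Equivalent of: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// This is the case the log reports as: No container was found matching "<component>"
			fmt.Printf("No container was found matching %q\n", name)
		} else {
			fmt.Printf("found %d container(s) for %q: %v\n", len(ids), name, ids)
		}
	}
}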
	I0422 18:27:57.730259   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:57.745006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:57.745073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:57.786906   78377 cri.go:89] found id: ""
	I0422 18:27:57.786942   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.786952   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:57.786959   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:57.787019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:57.827158   78377 cri.go:89] found id: ""
	I0422 18:27:57.827188   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.827199   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:57.827206   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:57.827254   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:57.864370   78377 cri.go:89] found id: ""
	I0422 18:27:57.864405   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.864413   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:57.864419   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:57.864475   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:57.903747   78377 cri.go:89] found id: ""
	I0422 18:27:57.903773   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.903781   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:57.903786   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:57.903846   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:57.941674   78377 cri.go:89] found id: ""
	I0422 18:27:57.941705   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.941713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:57.941718   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:57.941767   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:57.984888   78377 cri.go:89] found id: ""
	I0422 18:27:57.984918   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.984929   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:57.984935   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:57.984980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:58.026964   78377 cri.go:89] found id: ""
	I0422 18:27:58.026993   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.027006   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:58.027012   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:58.027059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:58.065403   78377 cri.go:89] found id: ""
	I0422 18:27:58.065430   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.065440   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:58.065450   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:58.065464   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:58.152471   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:58.152518   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:58.198766   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:58.198803   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:58.257760   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:58.257798   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:58.272656   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:58.272693   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:58.385784   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
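Each `describe nodes` attempt above fails the same way: the bundled kubectl gets "connection to the server localhost:8443 was refused", which is consistent with the earlier probes finding no kube-apiserver container at all. A minimal sketch of that reachability symptom is below; it is an assumption-labeled illustration meant to be run on the node itself, not part of the test harness.

// apicheck.go - sketch: confirm nothing is listening on localhost:8443 (the symptom in the stderr above).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the "connection ... was refused" error kubectl reports in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}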
	I0422 18:27:55.703110   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.704061   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.704421   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.361229   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:01.362273   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.385750   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:02.886349   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.886736   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:00.902607   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:00.902684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:00.941476   78377 cri.go:89] found id: ""
	I0422 18:28:00.941506   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.941515   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:00.941521   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:00.941571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:00.983107   78377 cri.go:89] found id: ""
	I0422 18:28:00.983142   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.983152   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:00.983159   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:00.983216   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:01.024419   78377 cri.go:89] found id: ""
	I0422 18:28:01.024448   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.024455   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:01.024461   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:01.024517   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:01.065941   78377 cri.go:89] found id: ""
	I0422 18:28:01.065973   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.065984   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:01.065992   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:01.066041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:01.107857   78377 cri.go:89] found id: ""
	I0422 18:28:01.107898   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.107908   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:01.107916   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:01.107980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:01.149626   78377 cri.go:89] found id: ""
	I0422 18:28:01.149657   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.149667   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:01.149676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:01.149740   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:01.190491   78377 cri.go:89] found id: ""
	I0422 18:28:01.190520   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.190529   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:01.190535   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:01.190590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:01.231145   78377 cri.go:89] found id: ""
	I0422 18:28:01.231176   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.231187   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:01.231197   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:01.231208   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:01.317826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:01.317874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:01.369441   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:01.369478   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:01.432210   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:01.432251   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:01.446720   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:01.446749   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:01.528643   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.029816   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:04.044751   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:04.044836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:04.085044   78377 cri.go:89] found id: ""
	I0422 18:28:04.085077   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.085089   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:04.085097   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:04.085148   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:04.129071   78377 cri.go:89] found id: ""
	I0422 18:28:04.129100   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.129111   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:04.129118   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:04.129181   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:04.167838   78377 cri.go:89] found id: ""
	I0422 18:28:04.167864   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.167874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:04.167881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:04.167943   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:04.216283   78377 cri.go:89] found id: ""
	I0422 18:28:04.216313   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.216321   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:04.216327   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:04.216376   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:04.255693   78377 cri.go:89] found id: ""
	I0422 18:28:04.255724   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.255731   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:04.255737   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:04.255786   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:04.293601   78377 cri.go:89] found id: ""
	I0422 18:28:04.293639   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.293651   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:04.293659   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:04.293709   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:04.358730   78377 cri.go:89] found id: ""
	I0422 18:28:04.358755   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.358767   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:04.358774   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:04.358837   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:04.399231   78377 cri.go:89] found id: ""
	I0422 18:28:04.399261   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.399271   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:04.399280   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:04.399291   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:04.415526   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:04.415558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:04.491845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.491871   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:04.491885   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:04.575076   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:04.575148   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:04.621931   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:04.621956   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:02.203877   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:04.204896   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:03.860506   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.860713   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.384180   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.884714   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.173117   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:07.188914   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:07.188973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:07.233867   78377 cri.go:89] found id: ""
	I0422 18:28:07.233894   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.233902   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:07.233907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:07.233968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:07.274777   78377 cri.go:89] found id: ""
	I0422 18:28:07.274818   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.274828   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:07.274835   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:07.274897   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:07.310813   78377 cri.go:89] found id: ""
	I0422 18:28:07.310864   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.310874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:07.310881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:07.310951   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:07.348397   78377 cri.go:89] found id: ""
	I0422 18:28:07.348423   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.348431   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:07.348436   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:07.348489   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:07.387344   78377 cri.go:89] found id: ""
	I0422 18:28:07.387371   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.387381   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:07.387388   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:07.387443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:07.426117   78377 cri.go:89] found id: ""
	I0422 18:28:07.426147   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.426158   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:07.426166   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:07.426233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:07.466624   78377 cri.go:89] found id: ""
	I0422 18:28:07.466653   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.466664   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:07.466671   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:07.466729   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:07.504282   78377 cri.go:89] found id: ""
	I0422 18:28:07.504306   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.504342   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:07.504353   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:07.504369   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:07.584111   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:07.584146   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:07.627212   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:07.627240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.676814   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:07.676849   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:07.691117   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:07.691156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:07.764300   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:06.206560   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.703406   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.364348   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.861760   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.361127   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.392330   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:12.883081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.265313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:10.280094   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:10.280170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:10.318208   78377 cri.go:89] found id: ""
	I0422 18:28:10.318236   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.318245   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:10.318251   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:10.318305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:10.353450   78377 cri.go:89] found id: ""
	I0422 18:28:10.353477   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.353484   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:10.353490   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:10.353547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:10.398359   78377 cri.go:89] found id: ""
	I0422 18:28:10.398389   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.398400   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:10.398411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:10.398474   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:10.435896   78377 cri.go:89] found id: ""
	I0422 18:28:10.435928   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.435939   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:10.435946   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:10.436025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:10.479313   78377 cri.go:89] found id: ""
	I0422 18:28:10.479342   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.479353   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:10.479360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:10.479433   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:10.521949   78377 cri.go:89] found id: ""
	I0422 18:28:10.521978   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.521990   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:10.521997   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:10.522054   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:10.557697   78377 cri.go:89] found id: ""
	I0422 18:28:10.557722   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.557732   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:10.557739   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:10.557804   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:10.595060   78377 cri.go:89] found id: ""
	I0422 18:28:10.595090   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.595102   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:10.595112   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:10.595142   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:10.649535   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:10.649570   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:10.664176   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:10.664210   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:10.748778   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.748818   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:10.748839   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:10.858019   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:10.858062   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:13.405737   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:13.420265   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:13.420342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:13.456505   78377 cri.go:89] found id: ""
	I0422 18:28:13.456534   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.456545   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:13.456551   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:13.456611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:13.493435   78377 cri.go:89] found id: ""
	I0422 18:28:13.493464   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.493477   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:13.493485   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:13.493541   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:13.530572   78377 cri.go:89] found id: ""
	I0422 18:28:13.530602   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.530614   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:13.530620   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:13.530682   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:13.565448   78377 cri.go:89] found id: ""
	I0422 18:28:13.565472   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.565480   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:13.565485   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:13.565574   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:13.613806   78377 cri.go:89] found id: ""
	I0422 18:28:13.613840   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.613851   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:13.613860   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:13.613924   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:13.649483   78377 cri.go:89] found id: ""
	I0422 18:28:13.649511   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.649522   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:13.649529   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:13.649589   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:13.689149   78377 cri.go:89] found id: ""
	I0422 18:28:13.689182   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.689193   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:13.689200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:13.689257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:13.726431   78377 cri.go:89] found id: ""
	I0422 18:28:13.726454   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.726461   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:13.726468   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:13.726480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:13.782843   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:13.782882   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:13.797390   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:13.797415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:13.877880   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:13.877905   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:13.877923   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:13.959103   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:13.959154   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:10.705202   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.203760   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.205898   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.361423   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:17.363341   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:14.883352   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.886433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.382478   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.502589   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:16.519996   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:16.520070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:16.559001   78377 cri.go:89] found id: ""
	I0422 18:28:16.559029   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.559037   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:16.559043   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:16.559095   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:16.620188   78377 cri.go:89] found id: ""
	I0422 18:28:16.620211   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.620219   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:16.620224   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:16.620283   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:16.670220   78377 cri.go:89] found id: ""
	I0422 18:28:16.670253   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.670264   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:16.670279   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:16.670345   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:16.710931   78377 cri.go:89] found id: ""
	I0422 18:28:16.710962   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.710973   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:16.710980   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:16.711043   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:16.748793   78377 cri.go:89] found id: ""
	I0422 18:28:16.748838   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.748845   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:16.748851   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:16.748904   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:16.785518   78377 cri.go:89] found id: ""
	I0422 18:28:16.785547   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.785554   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:16.785564   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:16.785616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:16.825141   78377 cri.go:89] found id: ""
	I0422 18:28:16.825174   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.825192   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:16.825200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:16.825265   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:16.866918   78377 cri.go:89] found id: ""
	I0422 18:28:16.866947   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.866958   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:16.866972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:16.866987   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:16.912589   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:16.912633   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:16.968407   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:16.968446   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:16.983202   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:16.983241   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:17.063852   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:17.063875   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:17.063889   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:19.645012   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:17.703917   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.704958   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.861537   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.862949   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.882158   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:23.885280   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.659676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:19.659750   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:19.697348   78377 cri.go:89] found id: ""
	I0422 18:28:19.697382   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.697393   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:19.697401   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:19.697461   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:19.738830   78377 cri.go:89] found id: ""
	I0422 18:28:19.738864   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.738876   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:19.738883   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:19.738945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:19.783452   78377 cri.go:89] found id: ""
	I0422 18:28:19.783476   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.783483   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:19.783491   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:19.783554   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:19.826848   78377 cri.go:89] found id: ""
	I0422 18:28:19.826875   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.826886   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:19.826893   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:19.826945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:19.867207   78377 cri.go:89] found id: ""
	I0422 18:28:19.867229   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.867236   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:19.867242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:19.867298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:19.903752   78377 cri.go:89] found id: ""
	I0422 18:28:19.903783   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.903799   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:19.903806   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:19.903870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:19.946891   78377 cri.go:89] found id: ""
	I0422 18:28:19.946914   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.946921   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:19.946927   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:19.946997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:19.989272   78377 cri.go:89] found id: ""
	I0422 18:28:19.989297   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.989304   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:19.989312   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:19.989323   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:20.038854   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:20.038887   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:20.053553   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:20.053584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:20.132687   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:20.132712   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:20.132727   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:20.209600   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:20.209634   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.752356   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:22.765506   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:22.765567   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:22.804991   78377 cri.go:89] found id: ""
	I0422 18:28:22.805022   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.805029   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:22.805035   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:22.805082   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:22.843726   78377 cri.go:89] found id: ""
	I0422 18:28:22.843757   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.843768   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:22.843775   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:22.843838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:22.884584   78377 cri.go:89] found id: ""
	I0422 18:28:22.884610   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.884620   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:22.884627   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:22.884701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:22.920974   78377 cri.go:89] found id: ""
	I0422 18:28:22.921004   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.921020   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:22.921028   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:22.921092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:22.956676   78377 cri.go:89] found id: ""
	I0422 18:28:22.956702   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.956713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:22.956720   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:22.956784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:22.997517   78377 cri.go:89] found id: ""
	I0422 18:28:22.997545   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.997553   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:22.997559   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:22.997623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:23.036448   78377 cri.go:89] found id: ""
	I0422 18:28:23.036478   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.036489   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:23.036497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:23.036561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:23.075567   78377 cri.go:89] found id: ""
	I0422 18:28:23.075592   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.075600   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:23.075611   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:23.075625   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:23.130372   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:23.130408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:23.147534   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:23.147567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:23.222730   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:23.222753   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:23.222765   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:23.301972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:23.302006   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.204356   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.703765   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.361251   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:26.862825   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.886291   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:28.382905   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.847521   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:25.861780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:25.861867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:25.899314   78377 cri.go:89] found id: ""
	I0422 18:28:25.899341   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.899349   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:25.899355   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:25.899412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:25.940057   78377 cri.go:89] found id: ""
	I0422 18:28:25.940088   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.940099   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:25.940106   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:25.940163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:25.974923   78377 cri.go:89] found id: ""
	I0422 18:28:25.974951   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.974959   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:25.974968   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:25.975041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:26.012533   78377 cri.go:89] found id: ""
	I0422 18:28:26.012559   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.012566   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:26.012572   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:26.012620   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:26.049804   78377 cri.go:89] found id: ""
	I0422 18:28:26.049828   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.049835   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:26.049841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:26.049888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:26.092803   78377 cri.go:89] found id: ""
	I0422 18:28:26.092830   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.092842   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:26.092850   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:26.092919   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:26.130442   78377 cri.go:89] found id: ""
	I0422 18:28:26.130471   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.130480   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:26.130487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:26.130544   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:26.165933   78377 cri.go:89] found id: ""
	I0422 18:28:26.165957   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.165966   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:26.165974   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:26.165986   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:26.245237   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:26.245259   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:26.245278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:26.330143   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:26.330181   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.372178   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:26.372204   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:26.429779   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:26.429817   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:28.945985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:28.960470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:28.960546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:28.999618   78377 cri.go:89] found id: ""
	I0422 18:28:28.999639   78377 logs.go:276] 0 containers: []
	W0422 18:28:28.999648   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:28.999653   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:28.999711   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:29.034177   78377 cri.go:89] found id: ""
	I0422 18:28:29.034211   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.034220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:29.034225   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:29.034286   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:29.073759   78377 cri.go:89] found id: ""
	I0422 18:28:29.073782   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.073790   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:29.073796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:29.073857   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:29.111898   78377 cri.go:89] found id: ""
	I0422 18:28:29.111929   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.111941   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:29.111948   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:29.112005   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:29.148486   78377 cri.go:89] found id: ""
	I0422 18:28:29.148520   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.148531   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:29.148539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:29.148602   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:29.186715   78377 cri.go:89] found id: ""
	I0422 18:28:29.186743   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.186753   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:29.186759   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:29.186805   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:29.226387   78377 cri.go:89] found id: ""
	I0422 18:28:29.226422   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.226433   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:29.226440   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:29.226508   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:29.274102   78377 cri.go:89] found id: ""
	I0422 18:28:29.274131   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.274142   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:29.274152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:29.274165   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:29.333066   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:29.333104   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:29.348376   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:29.348411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:29.422976   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:29.423009   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:29.423022   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:29.501211   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:29.501253   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.705590   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.205641   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.361439   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:31.361534   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:30.383502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.887006   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.048316   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:32.063859   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:32.063934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:32.104527   78377 cri.go:89] found id: ""
	I0422 18:28:32.104560   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.104571   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:32.104580   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:32.104645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:32.142945   78377 cri.go:89] found id: ""
	I0422 18:28:32.142976   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.142984   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:32.142990   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:32.143036   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:32.182359   78377 cri.go:89] found id: ""
	I0422 18:28:32.182385   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.182393   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:32.182399   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:32.182446   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:32.223041   78377 cri.go:89] found id: ""
	I0422 18:28:32.223069   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.223077   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:32.223083   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:32.223161   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:32.261892   78377 cri.go:89] found id: ""
	I0422 18:28:32.261924   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.261936   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:32.261943   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:32.262008   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:32.307497   78377 cri.go:89] found id: ""
	I0422 18:28:32.307527   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.307537   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:32.307546   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:32.307617   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:32.345180   78377 cri.go:89] found id: ""
	I0422 18:28:32.345214   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.345227   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:32.345235   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:32.345299   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:32.385999   78377 cri.go:89] found id: ""
	I0422 18:28:32.386025   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.386033   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:32.386041   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:32.386053   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:32.444377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:32.444436   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:32.460566   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:32.460594   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:32.535839   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:32.535860   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:32.535872   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:32.621998   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:32.622039   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:31.704145   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.704841   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.860769   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.860833   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.861583   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.382871   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.383164   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.165079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:35.178804   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:35.178877   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:35.221032   78377 cri.go:89] found id: ""
	I0422 18:28:35.221065   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.221076   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:35.221083   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:35.221170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:35.262550   78377 cri.go:89] found id: ""
	I0422 18:28:35.262573   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.262583   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:35.262589   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:35.262651   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:35.301799   78377 cri.go:89] found id: ""
	I0422 18:28:35.301826   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.301834   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:35.301840   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:35.301901   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:35.340606   78377 cri.go:89] found id: ""
	I0422 18:28:35.340635   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.340642   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:35.340647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:35.340695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:35.386226   78377 cri.go:89] found id: ""
	I0422 18:28:35.386251   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.386261   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:35.386268   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:35.386330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:35.424555   78377 cri.go:89] found id: ""
	I0422 18:28:35.424584   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.424594   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:35.424601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:35.424662   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:35.465856   78377 cri.go:89] found id: ""
	I0422 18:28:35.465886   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.465895   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:35.465901   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:35.465963   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:35.504849   78377 cri.go:89] found id: ""
	I0422 18:28:35.504877   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.504887   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:35.504898   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:35.504931   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:35.579177   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:35.579202   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:35.579217   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:35.656322   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:35.656359   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.700376   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:35.700411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:35.753742   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:35.753776   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.269536   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:38.285945   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:38.286019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:38.324408   78377 cri.go:89] found id: ""
	I0422 18:28:38.324441   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.324461   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:38.324468   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:38.324539   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:38.362320   78377 cri.go:89] found id: ""
	I0422 18:28:38.362343   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.362350   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:38.362363   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:38.362411   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:38.404208   78377 cri.go:89] found id: ""
	I0422 18:28:38.404234   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.404243   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:38.404248   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:38.404309   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:38.448250   78377 cri.go:89] found id: ""
	I0422 18:28:38.448314   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.448325   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:38.448332   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:38.448397   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:38.485803   78377 cri.go:89] found id: ""
	I0422 18:28:38.485836   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.485848   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:38.485856   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:38.485915   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:38.525903   78377 cri.go:89] found id: ""
	I0422 18:28:38.525933   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.525943   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:38.525952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:38.526031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:38.562638   78377 cri.go:89] found id: ""
	I0422 18:28:38.562664   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.562672   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:38.562677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:38.562726   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:38.603614   78377 cri.go:89] found id: ""
	I0422 18:28:38.603642   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.603653   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:38.603662   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:38.603673   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:38.658054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:38.658086   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.674884   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:38.674908   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:38.748462   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:38.748502   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:38.748528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:38.826701   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:38.826741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:36.204210   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:38.205076   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:40.360574   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.862692   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:39.882407   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.882939   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:43.883102   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.374075   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:41.389161   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:41.389235   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:41.427033   78377 cri.go:89] found id: ""
	I0422 18:28:41.427064   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.427075   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:41.427096   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:41.427178   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:41.465376   78377 cri.go:89] found id: ""
	I0422 18:28:41.465408   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.465419   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:41.465427   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:41.465512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:41.502451   78377 cri.go:89] found id: ""
	I0422 18:28:41.502482   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.502490   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:41.502501   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:41.502563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:41.538748   78377 cri.go:89] found id: ""
	I0422 18:28:41.538784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.538796   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:41.538803   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:41.538862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:41.576877   78377 cri.go:89] found id: ""
	I0422 18:28:41.576928   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.576941   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:41.576949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:41.577010   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:41.615062   78377 cri.go:89] found id: ""
	I0422 18:28:41.615094   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.615105   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:41.615113   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:41.615190   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:41.656757   78377 cri.go:89] found id: ""
	I0422 18:28:41.656784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.656792   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:41.656796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:41.656861   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:41.694351   78377 cri.go:89] found id: ""
	I0422 18:28:41.694374   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.694382   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:41.694390   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:41.694402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:41.775490   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:41.775528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.820152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:41.820182   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:41.874035   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:41.874071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:41.889510   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:41.889534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:41.967706   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:44.468471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:44.483108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:44.483202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:44.522503   78377 cri.go:89] found id: ""
	I0422 18:28:44.522528   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.522536   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:44.522542   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:44.522590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:44.562004   78377 cri.go:89] found id: ""
	I0422 18:28:44.562028   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.562036   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:44.562042   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:44.562098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:44.608907   78377 cri.go:89] found id: ""
	I0422 18:28:44.608944   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.608955   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:44.608964   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:44.609027   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:44.651192   78377 cri.go:89] found id: ""
	I0422 18:28:44.651225   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.651235   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:44.651242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:44.651304   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:40.703806   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.704426   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.707600   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.361890   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.860686   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.883300   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.884863   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.693057   78377 cri.go:89] found id: ""
	I0422 18:28:44.693095   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.693102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:44.693108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:44.693152   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:44.731029   78377 cri.go:89] found id: ""
	I0422 18:28:44.731070   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.731079   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:44.731092   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:44.731165   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:44.768935   78377 cri.go:89] found id: ""
	I0422 18:28:44.768964   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.768985   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:44.768993   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:44.769044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:44.814942   78377 cri.go:89] found id: ""
	I0422 18:28:44.814966   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.814984   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:44.814992   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:44.815012   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:44.872586   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:44.872612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:44.929068   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:44.929125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:44.945931   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:44.945960   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:45.019871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:45.019907   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:45.019922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:47.601880   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:47.616133   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:47.616219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:47.656526   78377 cri.go:89] found id: ""
	I0422 18:28:47.656547   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.656554   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:47.656560   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:47.656618   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:47.696580   78377 cri.go:89] found id: ""
	I0422 18:28:47.696609   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.696619   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:47.696626   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:47.696684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:47.737309   78377 cri.go:89] found id: ""
	I0422 18:28:47.737340   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.737351   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:47.737359   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:47.737413   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:47.774541   78377 cri.go:89] found id: ""
	I0422 18:28:47.774572   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.774583   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:47.774591   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:47.774652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:47.810397   78377 cri.go:89] found id: ""
	I0422 18:28:47.810429   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.810437   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:47.810444   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:47.810506   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:47.847293   78377 cri.go:89] found id: ""
	I0422 18:28:47.847327   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.847337   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:47.847345   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:47.847403   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:47.887454   78377 cri.go:89] found id: ""
	I0422 18:28:47.887476   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.887486   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:47.887493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:47.887553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:47.926706   78377 cri.go:89] found id: ""
	I0422 18:28:47.926731   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.926740   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:47.926750   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:47.926769   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:48.007354   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:48.007382   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:48.007398   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:48.094355   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:48.094394   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:48.137163   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:48.137194   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:48.187732   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:48.187767   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:47.207153   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.704440   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.863696   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.360739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.384172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.386468   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.703686   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:50.717040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:50.717113   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:50.751573   78377 cri.go:89] found id: ""
	I0422 18:28:50.751598   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.751610   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:50.751617   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:50.751674   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:50.790434   78377 cri.go:89] found id: ""
	I0422 18:28:50.790465   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.790476   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:50.790483   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:50.790537   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:50.852414   78377 cri.go:89] found id: ""
	I0422 18:28:50.852442   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.852451   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:50.852457   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:50.852512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:50.891439   78377 cri.go:89] found id: ""
	I0422 18:28:50.891470   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.891481   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:50.891488   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:50.891553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:50.929376   78377 cri.go:89] found id: ""
	I0422 18:28:50.929409   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.929420   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:50.929428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:50.929493   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:50.963919   78377 cri.go:89] found id: ""
	I0422 18:28:50.963949   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.963957   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:50.963963   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:50.964022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:50.998583   78377 cri.go:89] found id: ""
	I0422 18:28:50.998621   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.998632   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:50.998640   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:50.998702   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:51.036477   78377 cri.go:89] found id: ""
	I0422 18:28:51.036504   78377 logs.go:276] 0 containers: []
	W0422 18:28:51.036511   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:51.036519   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:51.036531   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:51.092688   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:51.092735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.107749   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:51.107778   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:51.185620   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:51.185643   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:51.185665   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:51.268824   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:51.268856   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:53.814341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:53.829048   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:53.829123   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:53.873451   78377 cri.go:89] found id: ""
	I0422 18:28:53.873483   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.873493   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:53.873500   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:53.873564   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:53.915262   78377 cri.go:89] found id: ""
	I0422 18:28:53.915295   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.915306   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:53.915315   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:53.915404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:53.958526   78377 cri.go:89] found id: ""
	I0422 18:28:53.958556   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.958567   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:53.958575   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:53.958645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:53.997452   78377 cri.go:89] found id: ""
	I0422 18:28:53.997484   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.997496   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:53.997503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:53.997563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:54.035937   78377 cri.go:89] found id: ""
	I0422 18:28:54.035961   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.035970   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:54.035975   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:54.036022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:54.078858   78377 cri.go:89] found id: ""
	I0422 18:28:54.078885   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.078893   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:54.078898   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:54.078959   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:54.117431   78377 cri.go:89] found id: ""
	I0422 18:28:54.117454   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.117462   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:54.117470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:54.117516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:54.156022   78377 cri.go:89] found id: ""
	I0422 18:28:54.156050   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.156059   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:54.156068   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:54.156085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:54.234075   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:54.234095   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:54.234108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:54.314392   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:54.314430   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:54.359388   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:54.359420   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:54.416412   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:54.416449   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.704563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.206032   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.362075   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.861096   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.883667   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:57.386081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.934970   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:56.948741   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:56.948820   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:56.984911   78377 cri.go:89] found id: ""
	I0422 18:28:56.984943   78377 logs.go:276] 0 containers: []
	W0422 18:28:56.984954   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:56.984961   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:56.985026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:57.022939   78377 cri.go:89] found id: ""
	I0422 18:28:57.022967   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.022980   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:57.022986   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:57.023033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:57.064582   78377 cri.go:89] found id: ""
	I0422 18:28:57.064606   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.064619   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:57.064626   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:57.064686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:57.105214   78377 cri.go:89] found id: ""
	I0422 18:28:57.105248   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.105259   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:57.105266   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:57.105317   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:57.142061   78377 cri.go:89] found id: ""
	I0422 18:28:57.142093   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.142104   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:57.142112   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:57.142176   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:57.187628   78377 cri.go:89] found id: ""
	I0422 18:28:57.187658   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.187668   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:57.187675   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:57.187744   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:57.223614   78377 cri.go:89] found id: ""
	I0422 18:28:57.223637   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.223645   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:57.223650   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:57.223705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:57.261853   78377 cri.go:89] found id: ""
	I0422 18:28:57.261876   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.261883   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:57.261890   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:57.261902   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:57.317980   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:57.318017   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:57.334434   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:57.334469   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:57.409639   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:57.409664   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:57.409680   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:57.494197   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:57.494240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:56.709043   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.203924   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:58.861932   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.360398   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.360867   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.882692   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.883267   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.383872   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:00.069390   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:00.083231   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:00.083307   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:00.123418   78377 cri.go:89] found id: ""
	I0422 18:29:00.123448   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.123459   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:00.123470   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:00.123533   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:00.159047   78377 cri.go:89] found id: ""
	I0422 18:29:00.159070   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.159081   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:00.159087   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:00.159191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:00.197934   78377 cri.go:89] found id: ""
	I0422 18:29:00.197960   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.198074   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:00.198086   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:00.198164   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:00.235243   78377 cri.go:89] found id: ""
	I0422 18:29:00.235273   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.235281   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:00.235287   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:00.235342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:00.271866   78377 cri.go:89] found id: ""
	I0422 18:29:00.271901   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.271912   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:00.271921   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:00.271981   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:00.308481   78377 cri.go:89] found id: ""
	I0422 18:29:00.308518   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.308531   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:00.308539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:00.308590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:00.343970   78377 cri.go:89] found id: ""
	I0422 18:29:00.343998   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.344009   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:00.344016   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:00.344063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:00.381443   78377 cri.go:89] found id: ""
	I0422 18:29:00.381462   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.381468   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:00.381475   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:00.381486   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:00.436244   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:00.436278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:00.451487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:00.451512   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:00.522440   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:00.522467   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:00.522483   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:00.602301   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:00.602333   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:03.141925   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:03.155393   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:03.155470   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:03.192801   78377 cri.go:89] found id: ""
	I0422 18:29:03.192825   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.192832   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:03.192838   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:03.192896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:03.244352   78377 cri.go:89] found id: ""
	I0422 18:29:03.244384   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.244395   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:03.244403   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:03.244466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:03.303294   78377 cri.go:89] found id: ""
	I0422 18:29:03.303318   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.303326   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:03.303331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:03.303384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:03.354236   78377 cri.go:89] found id: ""
	I0422 18:29:03.354267   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.354275   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:03.354282   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:03.354343   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:03.394639   78377 cri.go:89] found id: ""
	I0422 18:29:03.394669   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.394679   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:03.394686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:03.394754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:03.431362   78377 cri.go:89] found id: ""
	I0422 18:29:03.431408   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.431419   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:03.431428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:03.431494   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:03.472150   78377 cri.go:89] found id: ""
	I0422 18:29:03.472178   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.472186   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:03.472191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:03.472253   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:03.508059   78377 cri.go:89] found id: ""
	I0422 18:29:03.508083   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.508091   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:03.508100   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:03.508112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:03.557491   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:03.557528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:03.573208   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:03.573245   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:03.643262   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:03.643284   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:03.643295   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:03.726353   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:03.726389   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:01.204827   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.204916   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.355065   77634 pod_ready.go:81] duration metric: took 4m0.0011361s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:04.355113   77634 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:04.355148   77634 pod_ready.go:38] duration metric: took 4m14.498231749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:04.355180   77634 kubeadm.go:591] duration metric: took 4m21.764385121s to restartPrimaryControlPlane
	W0422 18:29:04.355236   77634 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:04.355261   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:06.385395   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:08.883604   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:06.270762   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:06.284792   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:06.284866   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:06.324717   78377 cri.go:89] found id: ""
	I0422 18:29:06.324750   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.324762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:06.324770   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:06.324829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:06.368279   78377 cri.go:89] found id: ""
	I0422 18:29:06.368311   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.368320   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:06.368326   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:06.368390   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:06.413754   78377 cri.go:89] found id: ""
	I0422 18:29:06.413789   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.413800   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:06.413807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:06.413864   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:06.453290   78377 cri.go:89] found id: ""
	I0422 18:29:06.453324   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.453335   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:06.453343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:06.453402   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:06.494420   78377 cri.go:89] found id: ""
	I0422 18:29:06.494472   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.494485   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:06.494493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:06.494547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:06.533736   78377 cri.go:89] found id: ""
	I0422 18:29:06.533768   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.533776   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:06.533784   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:06.533855   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:06.575873   78377 cri.go:89] found id: ""
	I0422 18:29:06.575899   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.575910   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:06.575917   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:06.575973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:06.620505   78377 cri.go:89] found id: ""
	I0422 18:29:06.620532   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.620541   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:06.620555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:06.620569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:06.701583   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:06.701607   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:06.701621   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:06.789370   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:06.789408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.832879   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:06.832915   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:06.892055   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:06.892085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:09.409104   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:09.422213   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:09.422287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:09.463906   78377 cri.go:89] found id: ""
	I0422 18:29:09.463938   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.463949   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:09.463956   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:09.464016   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:09.504600   78377 cri.go:89] found id: ""
	I0422 18:29:09.504626   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.504634   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:09.504640   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:09.504701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:09.544271   78377 cri.go:89] found id: ""
	I0422 18:29:09.544297   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.544308   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:09.544315   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:09.544385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:09.584323   78377 cri.go:89] found id: ""
	I0422 18:29:09.584355   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.584367   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:09.584375   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:09.584443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:09.621595   78377 cri.go:89] found id: ""
	I0422 18:29:09.621622   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.621632   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:09.621638   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:09.621703   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:05.703491   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:07.704534   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.705814   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:11.383569   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:13.883521   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.654701   78377 cri.go:89] found id: ""
	I0422 18:29:09.654731   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.654741   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:09.654749   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:09.654809   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:09.691517   78377 cri.go:89] found id: ""
	I0422 18:29:09.691544   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.691555   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:09.691561   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:09.691611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:09.726139   78377 cri.go:89] found id: ""
	I0422 18:29:09.726164   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.726172   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:09.726179   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:09.726192   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:09.796871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:09.796899   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:09.796920   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:09.876465   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:09.876509   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:09.917893   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:09.917930   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:09.968232   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:09.968273   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:12.484341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:12.499173   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:12.499243   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:12.536536   78377 cri.go:89] found id: ""
	I0422 18:29:12.536566   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.536577   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:12.536583   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:12.536642   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:12.578616   78377 cri.go:89] found id: ""
	I0422 18:29:12.578645   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.578655   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:12.578663   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:12.578742   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:12.615437   78377 cri.go:89] found id: ""
	I0422 18:29:12.615464   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.615475   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:12.615483   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:12.615552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:12.652622   78377 cri.go:89] found id: ""
	I0422 18:29:12.652647   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.652655   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:12.652661   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:12.652717   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:12.687831   78377 cri.go:89] found id: ""
	I0422 18:29:12.687863   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.687886   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:12.687895   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:12.687968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:12.725695   78377 cri.go:89] found id: ""
	I0422 18:29:12.725727   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.725734   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:12.725740   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:12.725801   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:12.764633   78377 cri.go:89] found id: ""
	I0422 18:29:12.764660   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.764669   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:12.764676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:12.764754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:12.803161   78377 cri.go:89] found id: ""
	I0422 18:29:12.803188   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.803199   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:12.803209   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:12.803225   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:12.874276   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:12.874298   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:12.874311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:12.961086   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:12.961123   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:13.009108   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:13.009134   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:13.060695   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:13.060741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:11.706608   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:14.204779   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:16.384284   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.884060   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:15.578465   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:15.592781   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:15.592847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:15.630723   78377 cri.go:89] found id: ""
	I0422 18:29:15.630763   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.630775   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:15.630784   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:15.630848   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:15.672656   78377 cri.go:89] found id: ""
	I0422 18:29:15.672682   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.672689   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:15.672694   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:15.672743   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:15.718081   78377 cri.go:89] found id: ""
	I0422 18:29:15.718107   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.718115   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:15.718120   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:15.718168   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:15.757204   78377 cri.go:89] found id: ""
	I0422 18:29:15.757229   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.757237   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:15.757242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:15.757289   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:15.793481   78377 cri.go:89] found id: ""
	I0422 18:29:15.793507   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.793515   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:15.793520   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:15.793571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:15.831366   78377 cri.go:89] found id: ""
	I0422 18:29:15.831414   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.831435   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:15.831443   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:15.831510   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:15.868553   78377 cri.go:89] found id: ""
	I0422 18:29:15.868583   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.868593   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:15.868601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:15.868657   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:15.908487   78377 cri.go:89] found id: ""
	I0422 18:29:15.908517   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.908527   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:15.908538   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:15.908553   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.923479   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:15.923507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:15.995109   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:15.995156   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:15.995172   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:16.074773   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:16.074812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.122088   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:16.122114   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:18.674525   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:18.688006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:18.688077   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:18.726070   78377 cri.go:89] found id: ""
	I0422 18:29:18.726101   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.726114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:18.726122   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:18.726183   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:18.762885   78377 cri.go:89] found id: ""
	I0422 18:29:18.762916   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.762928   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:18.762936   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:18.762996   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:18.802266   78377 cri.go:89] found id: ""
	I0422 18:29:18.802289   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.802297   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:18.802302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:18.802349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:18.841407   78377 cri.go:89] found id: ""
	I0422 18:29:18.841445   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.841453   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:18.841459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:18.841515   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:18.877234   78377 cri.go:89] found id: ""
	I0422 18:29:18.877308   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.877330   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:18.877343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:18.877410   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:18.917025   78377 cri.go:89] found id: ""
	I0422 18:29:18.917056   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.917063   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:18.917068   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:18.917124   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:18.954201   78377 cri.go:89] found id: ""
	I0422 18:29:18.954228   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.954235   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:18.954241   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:18.954298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:18.992427   78377 cri.go:89] found id: ""
	I0422 18:29:18.992454   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.992463   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:18.992471   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:18.992482   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:19.041093   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:19.041125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:19.056711   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:19.056742   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:19.142569   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:19.142593   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:19.142604   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:19.217815   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:19.217855   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.704652   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.704899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:21.391438   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:22.376750   77929 pod_ready.go:81] duration metric: took 4m0.000534542s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:22.376787   77929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:22.376811   77929 pod_ready.go:38] duration metric: took 4m11.560762914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:22.376844   77929 kubeadm.go:591] duration metric: took 4m19.827120959s to restartPrimaryControlPlane
	W0422 18:29:22.376929   77929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:22.376953   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:21.767953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:21.783373   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:21.783428   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:21.821614   78377 cri.go:89] found id: ""
	I0422 18:29:21.821644   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.821656   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:21.821664   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:21.821725   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:21.857122   78377 cri.go:89] found id: ""
	I0422 18:29:21.857151   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.857161   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:21.857168   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:21.857228   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:21.894803   78377 cri.go:89] found id: ""
	I0422 18:29:21.894825   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.894833   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:21.894841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:21.894896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:21.933665   78377 cri.go:89] found id: ""
	I0422 18:29:21.933701   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.933712   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:21.933723   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:21.933787   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:21.973071   78377 cri.go:89] found id: ""
	I0422 18:29:21.973113   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.973125   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:21.973143   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:21.973210   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:22.011359   78377 cri.go:89] found id: ""
	I0422 18:29:22.011391   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.011403   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:22.011410   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:22.011488   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:22.049681   78377 cri.go:89] found id: ""
	I0422 18:29:22.049709   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.049716   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:22.049721   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:22.049782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:22.088347   78377 cri.go:89] found id: ""
	I0422 18:29:22.088375   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.088386   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:22.088396   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:22.088410   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:22.142224   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:22.142267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:22.156643   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:22.156668   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:22.231849   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:22.231879   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:22.231892   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:22.313426   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:22.313470   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:21.203699   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:23.204704   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:25.206832   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:24.863473   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:24.882024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:24.882098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:24.924050   78377 cri.go:89] found id: ""
	I0422 18:29:24.924081   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.924092   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:24.924100   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:24.924163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:24.976296   78377 cri.go:89] found id: ""
	I0422 18:29:24.976326   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.976335   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:24.976345   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:24.976412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:25.029222   78377 cri.go:89] found id: ""
	I0422 18:29:25.029251   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.029272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:25.029280   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:25.029349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:25.077673   78377 cri.go:89] found id: ""
	I0422 18:29:25.077706   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.077717   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:25.077724   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:25.077784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:25.125043   78377 cri.go:89] found id: ""
	I0422 18:29:25.125078   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.125090   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:25.125098   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:25.125179   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:25.175533   78377 cri.go:89] found id: ""
	I0422 18:29:25.175566   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.175577   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:25.175585   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:25.175647   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:25.221986   78377 cri.go:89] found id: ""
	I0422 18:29:25.222016   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.222024   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:25.222030   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:25.222091   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:25.264497   78377 cri.go:89] found id: ""
	I0422 18:29:25.264536   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.264547   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:25.264558   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:25.264574   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:25.374379   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:25.374438   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:25.418690   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:25.418726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:25.472266   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:25.472300   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:25.488487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:25.488582   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:25.586957   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:28.087958   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:28.102224   78377 kubeadm.go:591] duration metric: took 4m2.253635072s to restartPrimaryControlPlane
	W0422 18:29:28.102310   78377 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:28.102339   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:27.706178   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:30.203899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:31.612457   78377 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.510090318s)
	I0422 18:29:31.612545   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:31.628958   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:31.640917   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:31.652696   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:31.652721   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:31.652770   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:31.664114   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:31.664168   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:31.674923   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:31.684843   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:31.684896   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:31.695240   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.706058   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:31.706111   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.717091   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:31.727265   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:31.727336   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:31.737801   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:31.812467   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:29:31.812529   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:31.966913   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:31.967059   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:31.967197   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:32.154019   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:32.156034   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:32.156123   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:32.156226   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:32.156318   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:32.156373   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:32.156431   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:32.156486   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:32.156545   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:32.156925   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:32.157393   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:32.157903   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:32.157945   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:32.158030   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:32.431206   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:32.644858   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:32.778777   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:32.983609   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:32.999320   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:32.999451   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:32.999532   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:33.136671   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:33.138828   78377 out.go:204]   - Booting up control plane ...
	I0422 18:29:33.138935   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:33.143714   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:33.145398   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:33.157636   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:33.157801   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:29:32.204107   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:34.707228   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:36.541281   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.185998541s)
	I0422 18:29:36.541367   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:36.558729   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:36.569635   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:36.579901   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:36.579919   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:36.579959   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:36.589540   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:36.589602   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:36.600704   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:36.610945   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:36.611012   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:36.621316   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.631251   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:36.631305   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.641661   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:36.650970   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:36.651049   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:36.661012   77634 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:36.717676   77634 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:36.717771   77634 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:36.861264   77634 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:36.861404   77634 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:36.861534   77634 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:37.083032   77634 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:37.084958   77634 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:37.085069   77634 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:37.085179   77634 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:37.085296   77634 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:37.085387   77634 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:37.085505   77634 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:37.085579   77634 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:37.085665   77634 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:37.085748   77634 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:37.085869   77634 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:37.085985   77634 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:37.086037   77634 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:37.086114   77634 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:37.337747   77634 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:37.538036   77634 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:37.630303   77634 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:37.755713   77634 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:38.081451   77634 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:38.082265   77634 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:38.084958   77634 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:38.086755   77634 out.go:204]   - Booting up control plane ...
	I0422 18:29:38.086893   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:38.087023   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:38.089714   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:38.108313   77634 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:38.108786   77634 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:38.108849   77634 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:38.241537   77634 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:38.241681   77634 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:37.203550   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:39.205619   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:38.743798   77634 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.847818ms
	I0422 18:29:38.743910   77634 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:44.245440   77634 kubeadm.go:309] [api-check] The API server is healthy after 5.501913498s
	I0422 18:29:44.265283   77634 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:29:44.280940   77634 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:29:44.318688   77634 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:29:44.318990   77634 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-782377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:29:44.332201   77634 kubeadm.go:309] [bootstrap-token] Using token: o52gh5.f6sjmkidroy1sl61
	I0422 18:29:44.333546   77634 out.go:204]   - Configuring RBAC rules ...
	I0422 18:29:44.333670   77634 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:29:44.342847   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:29:44.350983   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:29:44.354214   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:29:44.361351   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:29:44.365170   77634 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:29:44.654414   77634 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:29:45.170247   77634 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:29:45.654714   77634 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:29:45.654744   77634 kubeadm.go:309] 
	I0422 18:29:45.654847   77634 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:29:45.654871   77634 kubeadm.go:309] 
	I0422 18:29:45.654984   77634 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:29:45.654996   77634 kubeadm.go:309] 
	I0422 18:29:45.655028   77634 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:29:45.655108   77634 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:29:45.655201   77634 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:29:45.655211   77634 kubeadm.go:309] 
	I0422 18:29:45.655308   77634 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:29:45.655317   77634 kubeadm.go:309] 
	I0422 18:29:45.655395   77634 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:29:45.655414   77634 kubeadm.go:309] 
	I0422 18:29:45.655486   77634 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:29:45.655597   77634 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:29:45.655700   77634 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:29:45.655714   77634 kubeadm.go:309] 
	I0422 18:29:45.655824   77634 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:29:45.655951   77634 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:29:45.655963   77634 kubeadm.go:309] 
	I0422 18:29:45.656067   77634 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656226   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:29:45.656258   77634 kubeadm.go:309] 	--control-plane 
	I0422 18:29:45.656265   77634 kubeadm.go:309] 
	I0422 18:29:45.656383   77634 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:29:45.656394   77634 kubeadm.go:309] 
	I0422 18:29:45.656513   77634 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656602   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:29:45.657124   77634 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:29:45.657152   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:29:45.657168   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:29:45.658873   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:29:41.705450   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:44.205661   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:45.660184   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:29:45.671834   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:29:45.693947   77634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:29:45.694034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:45.694054   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-782377 minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=embed-certs-782377 minikube.k8s.io/primary=true
	I0422 18:29:45.901437   77634 ops.go:34] apiserver oom_adj: -16
	I0422 18:29:45.901443   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.402050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.902222   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.402527   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.901535   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.206598   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.703899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.401738   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:48.902497   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.402046   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.901756   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.402023   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.901600   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.401905   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.901739   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.401859   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.902155   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.661872   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.28489375s)
	I0422 18:29:54.661952   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:54.679790   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:54.689947   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:54.700173   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:54.700191   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:54.700230   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:29:54.711462   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:54.711519   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:54.721157   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:29:54.730698   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:54.730769   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:54.740596   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.750450   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:54.750521   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.760582   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:29:54.770551   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:54.770608   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:54.781181   77929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:54.834872   77929 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:54.834950   77929 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:54.982435   77929 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:54.982574   77929 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:54.982675   77929 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:55.208724   77929 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:50.704498   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:53.203270   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.206485   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.210946   77929 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:55.211072   77929 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:55.211180   77929 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:55.211326   77929 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:55.211425   77929 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:55.211546   77929 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:55.211655   77929 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:55.211746   77929 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:55.211831   77929 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:55.211932   77929 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:55.212028   77929 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:55.212076   77929 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:55.212150   77929 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:55.456090   77929 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:55.747103   77929 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:55.940962   77929 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:56.076850   77929 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:56.253326   77929 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:56.253921   77929 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:56.259311   77929 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:53.402196   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:53.902328   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.402353   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.901736   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.401514   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.902415   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.402371   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.902117   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.401817   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.902050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.402034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.574005   77634 kubeadm.go:1107] duration metric: took 12.880033802s to wait for elevateKubeSystemPrivileges
	W0422 18:29:58.574051   77634 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:29:58.574061   77634 kubeadm.go:393] duration metric: took 5m16.036878933s to StartCluster
	I0422 18:29:58.574083   77634 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.574173   77634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:29:58.576621   77634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.576908   77634 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:29:58.578444   77634 out.go:177] * Verifying Kubernetes components...
	I0422 18:29:58.576967   77634 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:29:58.577120   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:29:58.579836   77634 addons.go:69] Setting default-storageclass=true in profile "embed-certs-782377"
	I0422 18:29:58.579846   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:29:58.579850   77634 addons.go:69] Setting metrics-server=true in profile "embed-certs-782377"
	I0422 18:29:58.579873   77634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-782377"
	I0422 18:29:58.579896   77634 addons.go:234] Setting addon metrics-server=true in "embed-certs-782377"
	W0422 18:29:58.579910   77634 addons.go:243] addon metrics-server should already be in state true
	I0422 18:29:58.579952   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.579841   77634 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-782377"
	I0422 18:29:58.580057   77634 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-782377"
	W0422 18:29:58.580070   77634 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:29:58.580099   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.580279   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580284   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580301   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580308   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580460   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580488   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.603276   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0422 18:29:58.603459   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0422 18:29:58.603483   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 18:29:58.607248   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607265   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607392   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607836   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.607853   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.607983   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.608001   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.608344   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608373   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608505   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.608932   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.608963   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612034   77634 addons.go:234] Setting addon default-storageclass=true in "embed-certs-782377"
	W0422 18:29:58.612056   77634 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:29:58.612084   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.612467   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.612485   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612786   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.612802   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.613185   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.613700   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.613728   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.630170   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0422 18:29:58.630586   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.631061   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.631081   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.631523   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.631693   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.631847   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0422 18:29:58.632457   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.632941   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.632966   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.633179   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0422 18:29:58.633322   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.633567   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.633688   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.635830   77634 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:29:58.633856   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.634354   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.636961   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.637004   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:29:58.637027   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:29:58.637045   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.637006   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.637294   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.637508   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.639287   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.640999   77634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:29:58.640236   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:56.261447   77929 out.go:204]   - Booting up control plane ...
	I0422 18:29:56.261539   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:56.261635   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:56.261736   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:56.285519   77929 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:56.285675   77929 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:56.285752   77929 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:56.437635   77929 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:56.437767   77929 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:56.944001   77929 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 506.500244ms
	I0422 18:29:56.944104   77929 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:58.640741   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.642428   77634 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.641034   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.642448   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:29:58.642456   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.642470   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.642590   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.642733   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.642860   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.645684   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646424   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.646469   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646728   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.646929   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.647079   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.647331   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.657385   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 18:29:58.658062   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.658658   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.658676   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.659065   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.659314   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.661001   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.661274   77634 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:58.661292   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:29:58.661309   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.664551   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.665029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665185   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.665397   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.665560   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.665692   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.840086   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:29:58.872963   77634 node_ready.go:35] waiting up to 6m0s for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882942   77634 node_ready.go:49] node "embed-certs-782377" has status "Ready":"True"
	I0422 18:29:58.882978   77634 node_ready.go:38] duration metric: took 9.978929ms for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882990   77634 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:58.892484   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:29:58.964679   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.987690   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:59.001748   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:29:59.001776   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:29:59.095009   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:29:59.095039   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:29:59.242427   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.242451   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:29:59.321464   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.989825   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025095721s)
	I0422 18:29:59.989883   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.989895   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.989828   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.002098611s)
	I0422 18:29:59.989974   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990005   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990193   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990231   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990239   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990247   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990254   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990306   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990341   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990355   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990369   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990380   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990504   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990523   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990572   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990588   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.025628   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.025655   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.025970   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.025991   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.434245   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.434287   77634 pod_ready.go:81] duration metric: took 1.54176792s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.434301   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454521   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.454545   77634 pod_ready.go:81] duration metric: took 20.235494ms for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454557   77634 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.473166   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.151631277s)
	I0422 18:30:00.473225   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473266   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473625   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.473660   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.473683   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.473706   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473719   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473998   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.474079   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.474098   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.474114   77634 addons.go:470] Verifying addon metrics-server=true in "embed-certs-782377"
	I0422 18:30:00.476224   77634 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:29:57.706757   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.206098   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.477945   77634 addons.go:505] duration metric: took 1.900979481s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:30:00.493925   77634 pod_ready.go:92] pod "etcd-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.493956   77634 pod_ready.go:81] duration metric: took 39.391277ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.493971   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502733   77634 pod_ready.go:92] pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.502762   77634 pod_ready.go:81] duration metric: took 8.782315ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502776   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517227   77634 pod_ready.go:92] pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.517249   77634 pod_ready.go:81] duration metric: took 14.465418ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517260   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881221   77634 pod_ready.go:92] pod "kube-proxy-6qsdm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.881245   77634 pod_ready.go:81] duration metric: took 363.979231ms for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881254   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277017   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:01.277103   77634 pod_ready.go:81] duration metric: took 395.840808ms for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277125   77634 pod_ready.go:38] duration metric: took 2.394112246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:01.277153   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:01.277240   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:01.295278   77634 api_server.go:72] duration metric: took 2.718332063s to wait for apiserver process to appear ...
	I0422 18:30:01.295316   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:01.295345   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:30:01.299754   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:30:01.300888   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:01.300912   77634 api_server.go:131] duration metric: took 5.588825ms to wait for apiserver health ...
	I0422 18:30:01.300920   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:01.480184   77634 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:01.480216   77634 system_pods.go:61] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.480220   77634 system_pods.go:61] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.480224   77634 system_pods.go:61] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.480227   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.480231   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.480234   77634 system_pods.go:61] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.480237   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.480243   77634 system_pods.go:61] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.480246   77634 system_pods.go:61] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.480253   77634 system_pods.go:74] duration metric: took 179.327678ms to wait for pod list to return data ...
	I0422 18:30:01.480260   77634 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:01.676749   77634 default_sa.go:45] found service account: "default"
	I0422 18:30:01.676792   77634 default_sa.go:55] duration metric: took 196.525393ms for default service account to be created ...
	I0422 18:30:01.676805   77634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:01.881811   77634 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:01.881846   77634 system_pods.go:89] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.881852   77634 system_pods.go:89] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.881856   77634 system_pods.go:89] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.881861   77634 system_pods.go:89] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.881866   77634 system_pods.go:89] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.881871   77634 system_pods.go:89] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.881875   77634 system_pods.go:89] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.881884   77634 system_pods.go:89] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.881891   77634 system_pods.go:89] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.881902   77634 system_pods.go:126] duration metric: took 205.08856ms to wait for k8s-apps to be running ...
	I0422 18:30:01.881915   77634 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:01.881971   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:01.898653   77634 system_svc.go:56] duration metric: took 16.727076ms WaitForService to wait for kubelet
	I0422 18:30:01.898688   77634 kubeadm.go:576] duration metric: took 3.321747224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:01.898716   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:02.079527   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:02.079552   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:02.079567   77634 node_conditions.go:105] duration metric: took 180.844523ms to run NodePressure ...
	I0422 18:30:02.079581   77634 start.go:240] waiting for startup goroutines ...
	I0422 18:30:02.079590   77634 start.go:245] waiting for cluster config update ...
	I0422 18:30:02.079603   77634 start.go:254] writing updated cluster config ...
	I0422 18:30:02.079881   77634 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:02.131965   77634 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:02.133816   77634 out.go:177] * Done! kubectl is now configured to use "embed-certs-782377" cluster and "default" namespace by default
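The embed-certs bring-up above follows the usual readiness sequence: wait for the system pods, poll the apiserver /healthz endpoint over HTTPS until it answers 200 "ok" (the api_server.go lines), then verify the enabled addons. For reference, below is a minimal Go sketch of that healthz polling step. It is not minikube's implementation; the endpoint URL, timeout, and poll interval are illustrative assumptions, and TLS verification is skipped only because the apiserver certificate in this setup is self-signed.

// healthzwait.go: poll an apiserver /healthz endpoint until it returns 200 "ok",
// mirroring the api_server.go wait recorded in the log above. URL, timeout and
// interval are placeholders, not values taken from minikube.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Self-signed apiserver cert in this sketch, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.114:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}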
	I0422 18:30:02.446649   77929 kubeadm.go:309] [api-check] The API server is healthy after 5.502662802s
	I0422 18:30:02.466311   77929 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:02.504029   77929 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:02.586946   77929 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:02.587250   77929 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-856422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:02.600362   77929 kubeadm.go:309] [bootstrap-token] Using token: f03yx2.2vmzf4rav70vm6gm
	I0422 18:30:02.601830   77929 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:02.601961   77929 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:02.608688   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:02.621264   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:02.625695   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:02.630424   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:02.639203   77929 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:02.856167   77929 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:03.309505   77929 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:03.855419   77929 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:03.855443   77929 kubeadm.go:309] 
	I0422 18:30:03.855541   77929 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:03.855567   77929 kubeadm.go:309] 
	I0422 18:30:03.855643   77929 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:03.855653   77929 kubeadm.go:309] 
	I0422 18:30:03.855688   77929 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:03.855756   77929 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:03.855841   77929 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:03.855854   77929 kubeadm.go:309] 
	I0422 18:30:03.855909   77929 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:03.855915   77929 kubeadm.go:309] 
	I0422 18:30:03.855954   77929 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:03.855960   77929 kubeadm.go:309] 
	I0422 18:30:03.856051   77929 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:03.856171   77929 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:03.856248   77929 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:03.856259   77929 kubeadm.go:309] 
	I0422 18:30:03.856390   77929 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:03.856484   77929 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:03.856496   77929 kubeadm.go:309] 
	I0422 18:30:03.856636   77929 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.856729   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:03.856749   77929 kubeadm.go:309] 	--control-plane 
	I0422 18:30:03.856755   77929 kubeadm.go:309] 
	I0422 18:30:03.856823   77929 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:03.856829   77929 kubeadm.go:309] 
	I0422 18:30:03.856911   77929 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.857040   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:03.857540   77929 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:03.857569   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:30:03.857583   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:03.859350   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:03.860736   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:03.873189   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
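The two cni.go/ssh_runner.go lines above create /etc/cni/net.d and copy a 496-byte bridge conflist onto the node as 1-k8s.conflist. The log does not print the file's contents, so the sketch below writes a generic bridge-plugin conflist of the same shape purely as an illustration: the JSON payload (subnet, plugin options) is an assumption, not the exact bytes minikube transfers.

// writecni.go: write a bridge CNI conflist to /etc/cni/net.d/1-k8s.conflist,
// analogous to the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step above.
// The embedded JSON is a generic bridge-plugin example, not minikube's payload.
package main

import (
	"log"
	"os"
	"path/filepath"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // counterpart of "sudo mkdir -p /etc/cni/net.d"
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}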
	I0422 18:30:03.897193   77929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:03.897260   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:03.897317   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-856422 minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=default-k8s-diff-port-856422 minikube.k8s.io/primary=true
	I0422 18:30:04.114339   77929 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:04.114499   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:02.703452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.705502   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.615355   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.115530   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.614776   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.114991   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.614772   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.114921   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.614799   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.115218   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.614688   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:09.114578   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.203762   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.704636   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.615201   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.115526   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.614511   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.115041   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.615220   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.115463   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.614937   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.115470   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.615417   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:14.114916   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.158118   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:30:13.158841   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:13.159056   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:11.706452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.203931   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.614582   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.115466   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.615542   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.115554   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.614586   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.114645   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.614945   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.769793   77929 kubeadm.go:1107] duration metric: took 13.872592974s to wait for elevateKubeSystemPrivileges
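The burst of identical "kubectl get sa default" runs above is a poll loop: after granting cluster-admin to kube-system:default, the start-up code keeps asking for the default service account until the request succeeds, which here takes roughly 14 seconds and is reported as the elevateKubeSystemPrivileges duration. A small Go sketch of that retry pattern follows; the kubectl path, kubeconfig, interval, and timeout are placeholder values rather than minikube's actual ones.

// waitsa.go: retry "kubectl get sa default" until it succeeds, mirroring the
// polling loop visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account now exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	if err != nil {
		fmt.Println(err)
	}
}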
	W0422 18:30:17.769857   77929 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:30:17.769869   77929 kubeadm.go:393] duration metric: took 5m15.279261637s to StartCluster
	I0422 18:30:17.769889   77929 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.769958   77929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:30:17.771921   77929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.772222   77929 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:30:17.774219   77929 out.go:177] * Verifying Kubernetes components...
	I0422 18:30:17.772365   77929 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:30:17.772496   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:30:17.776231   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:30:17.776249   77929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776267   77929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776294   77929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776307   77929 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:30:17.776321   77929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-856422"
	I0422 18:30:17.776343   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776284   77929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776412   77929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776430   77929 addons.go:243] addon metrics-server should already be in state true
	I0422 18:30:17.776469   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776775   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776809   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776778   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776846   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776777   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776926   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.796665   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0422 18:30:17.796701   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0422 18:30:17.796976   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0422 18:30:17.797083   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797472   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797609   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797795   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.797824   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798111   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798141   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798158   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798499   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798627   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798648   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798728   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.798776   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799001   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.799077   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.799107   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799274   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.803095   77929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.803141   77929 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:30:17.803175   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.803544   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.803580   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.820753   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0422 18:30:17.821272   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.821822   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.821839   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.822247   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.822315   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0422 18:30:17.822640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.823287   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0422 18:30:17.823373   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.823976   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.824141   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824152   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824479   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824498   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824561   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.824727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.825176   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.825646   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.825675   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.826014   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.828122   77929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:30:17.826808   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.829694   77929 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:17.829711   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:30:17.829729   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.831322   77929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:30:17.834942   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:30:17.834959   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:30:17.834979   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.833531   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.832894   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835054   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.835078   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.835468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.835674   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.837838   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838180   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.838204   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838459   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.838656   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.838827   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.838983   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.844804   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0422 18:30:17.845252   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.845762   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.845780   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.846071   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.846240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.847881   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.848127   77929 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:17.848142   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:30:17.848159   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.850959   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851369   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.851389   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.851786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.851918   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.852081   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.997608   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:30:18.066476   77929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.139937   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:18.141619   77929 node_ready.go:49] node "default-k8s-diff-port-856422" has status "Ready":"True"
	I0422 18:30:18.141645   77929 node_ready.go:38] duration metric: took 75.13675ms for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.141658   77929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:18.168289   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:18.217351   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:30:18.217374   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:30:18.280089   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:18.283704   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:30:18.283734   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:30:18.314907   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.314936   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:30:18.379950   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.595931   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.595969   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596350   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596374   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.596389   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596660   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596699   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596722   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610244   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.610277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.610614   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.610635   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610659   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.159553   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:18.159883   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:19.513892   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233747961s)
	I0422 18:30:19.513948   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.513961   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514326   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.514460   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.514491   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.514506   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514414   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517601   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.517617   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.805551   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425552646s)
	I0422 18:30:19.805610   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.805621   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.805986   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.806040   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.806064   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.806083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.807818   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.807865   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.807874   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.807889   77929 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-856422"
	I0422 18:30:19.809871   77929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0422 18:30:15.697614   77400 pod_ready.go:81] duration metric: took 4m0.000479463s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	E0422 18:30:15.697661   77400 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:30:15.697678   77400 pod_ready.go:38] duration metric: took 4m9.017394523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:15.697704   77400 kubeadm.go:591] duration metric: took 4m15.772560858s to restartPrimaryControlPlane
	W0422 18:30:15.697751   77400 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:30:15.697777   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:30:19.811644   77929 addons.go:505] duration metric: took 2.039289124s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0422 18:30:20.174912   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:20.675213   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.675247   77929 pod_ready.go:81] duration metric: took 2.506921343s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.675261   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681665   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.681690   77929 pod_ready.go:81] duration metric: took 6.421217ms for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681700   77929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687893   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.687926   77929 pod_ready.go:81] duration metric: took 6.218166ms for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687941   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696603   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.696634   77929 pod_ready.go:81] duration metric: took 8.684682ms for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696649   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702776   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.702800   77929 pod_ready.go:81] duration metric: took 6.141484ms for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702813   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073451   77929 pod_ready.go:92] pod "kube-proxy-4m8cm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.073485   77929 pod_ready.go:81] duration metric: took 370.663669ms for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073500   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474144   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.474175   77929 pod_ready.go:81] duration metric: took 400.665802ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474190   77929 pod_ready.go:38] duration metric: took 3.332515716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:21.474207   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:21.474273   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:21.491320   77929 api_server.go:72] duration metric: took 3.719060391s to wait for apiserver process to appear ...
	I0422 18:30:21.491352   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:21.491378   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:30:21.496589   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:30:21.497405   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:21.497426   77929 api_server.go:131] duration metric: took 6.067469ms to wait for apiserver health ...
	I0422 18:30:21.497433   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:21.675885   77929 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:21.675912   77929 system_pods.go:61] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:21.675916   77929 system_pods.go:61] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:21.675924   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:21.675928   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:21.675932   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:21.675935   77929 system_pods.go:61] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:21.675939   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:21.675945   77929 system_pods.go:61] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:21.675949   77929 system_pods.go:61] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:21.675959   77929 system_pods.go:74] duration metric: took 178.519985ms to wait for pod list to return data ...
	I0422 18:30:21.675965   77929 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:21.872358   77929 default_sa.go:45] found service account: "default"
	I0422 18:30:21.872382   77929 default_sa.go:55] duration metric: took 196.412252ms for default service account to be created ...
	I0422 18:30:21.872391   77929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:22.075660   77929 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:22.075689   77929 system_pods.go:89] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:22.075694   77929 system_pods.go:89] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:22.075698   77929 system_pods.go:89] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:22.075702   77929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:22.075706   77929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:22.075710   77929 system_pods.go:89] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:22.075714   77929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:22.075722   77929 system_pods.go:89] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:22.075726   77929 system_pods.go:89] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:22.075735   77929 system_pods.go:126] duration metric: took 203.339608ms to wait for k8s-apps to be running ...
	I0422 18:30:22.075742   77929 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:22.075785   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:22.091186   77929 system_svc.go:56] duration metric: took 15.433207ms WaitForService to wait for kubelet
	I0422 18:30:22.091219   77929 kubeadm.go:576] duration metric: took 4.318966383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:22.091237   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:22.272944   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:22.272971   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:22.272980   77929 node_conditions.go:105] duration metric: took 181.734735ms to run NodePressure ...
	I0422 18:30:22.272991   77929 start.go:240] waiting for startup goroutines ...
	I0422 18:30:22.273000   77929 start.go:245] waiting for cluster config update ...
	I0422 18:30:22.273010   77929 start.go:254] writing updated cluster config ...
	I0422 18:30:22.273248   77929 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:22.323725   77929 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:22.325876   77929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-856422" cluster and "default" namespace by default
	I0422 18:30:28.159925   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:28.160147   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.161034   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:48.161430   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
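The [kubelet-check] lines above are kubeadm repeatedly probing the kubelet's local healthz endpoint (http://localhost:10248/healthz) and getting connection refused, meaning the kubelet on that node has not come up within the check window. A minimal Go sketch of the same probe; the retry count and interval are assumptions, not kubeadm's values.

// kubeletcheck.go: probe the kubelet healthz endpoint the way the kubeadm
// [kubelet-check] lines above describe.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// "connection refused" here means the kubelet is not listening yet.
			fmt.Println("kubelet not healthy:", err)
		} else {
			fmt.Println("kubelet healthz status:", resp.StatusCode)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
}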
	I0422 18:30:48.109960   77400 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.41215685s)
	I0422 18:30:48.110037   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:48.127246   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:30:48.138280   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:30:48.148521   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:30:48.148545   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:30:48.148588   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:30:48.160411   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:30:48.160483   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:30:48.170748   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:30:48.180399   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:30:48.180451   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:30:48.192521   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.202200   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:30:48.202274   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.212241   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:30:48.221754   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:30:48.221821   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:30:48.231555   77400 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:30:48.456873   77400 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:57.943980   77400 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:30:57.944080   77400 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:30:57.944182   77400 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:30:57.944305   77400 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:30:57.944411   77400 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:30:57.944499   77400 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:30:57.946110   77400 out.go:204]   - Generating certificates and keys ...
	I0422 18:30:57.946192   77400 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:30:57.946262   77400 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:30:57.946385   77400 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:30:57.946464   77400 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:30:57.946559   77400 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:30:57.946683   77400 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:30:57.946772   77400 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:30:57.946835   77400 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:30:57.946902   77400 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:30:57.946963   77400 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:30:57.947000   77400 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:30:57.947054   77400 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:30:57.947116   77400 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:30:57.947201   77400 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:30:57.947283   77400 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:30:57.947383   77400 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:30:57.947458   77400 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:30:57.947589   77400 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:30:57.947662   77400 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:30:57.949092   77400 out.go:204]   - Booting up control plane ...
	I0422 18:30:57.949194   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:30:57.949279   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:30:57.949336   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:30:57.949419   77400 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:30:57.949505   77400 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:30:57.949544   77400 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:30:57.949664   77400 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:30:57.949739   77400 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:30:57.949794   77400 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.588061ms
	I0422 18:30:57.949862   77400 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:30:57.949957   77400 kubeadm.go:309] [api-check] The API server is healthy after 5.510546703s
	I0422 18:30:57.950048   77400 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:57.950152   77400 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:57.950204   77400 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:57.950352   77400 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-407991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:57.950453   77400 kubeadm.go:309] [bootstrap-token] Using token: cwotot.4qmmrydp0nd6w5tq
	I0422 18:30:57.951938   77400 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:57.952040   77400 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:57.952134   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:57.952285   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:57.952410   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:57.952535   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:57.952666   77400 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:57.952799   77400 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:57.952867   77400 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:57.952936   77400 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:57.952952   77400 kubeadm.go:309] 
	I0422 18:30:57.953013   77400 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:57.953019   77400 kubeadm.go:309] 
	I0422 18:30:57.953084   77400 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:57.953090   77400 kubeadm.go:309] 
	I0422 18:30:57.953110   77400 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:57.953199   77400 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:57.953281   77400 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:57.953289   77400 kubeadm.go:309] 
	I0422 18:30:57.953374   77400 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:57.953381   77400 kubeadm.go:309] 
	I0422 18:30:57.953453   77400 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:57.953461   77400 kubeadm.go:309] 
	I0422 18:30:57.953538   77400 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:57.953636   77400 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:57.953719   77400 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:57.953726   77400 kubeadm.go:309] 
	I0422 18:30:57.953813   77400 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:57.953919   77400 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:57.953930   77400 kubeadm.go:309] 
	I0422 18:30:57.954047   77400 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954187   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:57.954222   77400 kubeadm.go:309] 	--control-plane 
	I0422 18:30:57.954232   77400 kubeadm.go:309] 
	I0422 18:30:57.954364   77400 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:57.954374   77400 kubeadm.go:309] 
	I0422 18:30:57.954440   77400 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954553   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:57.954574   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:30:57.954583   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:57.956278   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:57.957592   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:57.970080   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
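	With the bridge CNI selected, minikube writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist (its contents are not shown in the log). A quick, illustrative way to confirm it landed on the node:

	sudo ls -la /etc/cni/net.d/1-k8s.conflist   # should exist, 496 bytes per the log above
	sudo cat /etc/cni/net.d/1-k8s.conflist      # inspect the generated bridge configuration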
	I0422 18:30:57.991711   77400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:57.991779   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:57.991780   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-407991 minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=no-preload-407991 minikube.k8s.io/primary=true
	I0422 18:30:58.232025   77400 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:58.232162   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:58.732395   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.232855   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.732187   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.232654   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.732995   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.232856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.732735   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.232474   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.732930   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.232411   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.732457   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.232888   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.732856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.232873   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.733177   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.232682   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.733241   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.232711   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.732922   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.232815   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.732377   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.232576   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.732243   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.232350   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.732764   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.232338   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.357414   77400 kubeadm.go:1107] duration metric: took 13.365692776s to wait for elevateKubeSystemPrivileges
	W0422 18:31:11.357460   77400 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:31:11.357472   77400 kubeadm.go:393] duration metric: took 5m11.48385131s to StartCluster
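	The repeated "kubectl get sa default" calls above are minikube polling, roughly every half second judging by the timestamps, until the "default" ServiceAccount exists, i.e. until kube-system privileges have been elevated. An equivalent, illustrative shell loop:

	until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # interval is an approximation taken from the log timestamps
	done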
	I0422 18:31:11.357493   77400 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.357565   77400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:31:11.359176   77400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.359391   77400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:31:11.360948   77400 out.go:177] * Verifying Kubernetes components...
	I0422 18:31:11.359461   77400 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:31:11.359641   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:31:11.362433   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:31:11.362446   77400 addons.go:69] Setting storage-provisioner=true in profile "no-preload-407991"
	I0422 18:31:11.362464   77400 addons.go:69] Setting default-storageclass=true in profile "no-preload-407991"
	I0422 18:31:11.362486   77400 addons.go:69] Setting metrics-server=true in profile "no-preload-407991"
	I0422 18:31:11.362495   77400 addons.go:234] Setting addon storage-provisioner=true in "no-preload-407991"
	I0422 18:31:11.362500   77400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-407991"
	I0422 18:31:11.362515   77400 addons.go:234] Setting addon metrics-server=true in "no-preload-407991"
	W0422 18:31:11.362527   77400 addons.go:243] addon metrics-server should already be in state true
	W0422 18:31:11.362506   77400 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:31:11.362557   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362567   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362929   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362932   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362963   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362971   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362974   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.363144   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.379089   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0422 18:31:11.379582   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.380121   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.380145   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.380496   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.381098   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.381132   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.383229   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 18:31:11.383513   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0422 18:31:11.383642   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.383977   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.384136   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384148   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384552   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.384754   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384770   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384801   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.385103   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.386102   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.386130   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.388554   77400 addons.go:234] Setting addon default-storageclass=true in "no-preload-407991"
	W0422 18:31:11.388569   77400 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:31:11.388589   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.388921   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.388938   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.401669   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0422 18:31:11.402268   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.402852   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.402869   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.403427   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.403610   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.404849   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0422 18:31:11.405356   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.405588   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.406112   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.406129   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.407696   77400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:31:11.406649   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.409174   77400 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.409195   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:31:11.409214   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.409261   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.411378   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.412836   77400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:31:11.411939   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0422 18:31:11.414011   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:31:11.414027   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:31:11.413155   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.414045   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.414069   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.413487   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.414097   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.413841   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.414686   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.414781   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.414794   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.414871   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.415256   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.415607   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.416288   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.416343   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.417257   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417623   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.417644   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417898   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.418074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.418325   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.418468   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.432218   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0422 18:31:11.432682   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.433096   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.433108   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.433685   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.433887   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.435675   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.435931   77400 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.435952   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:31:11.435969   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.438700   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439107   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.439144   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439237   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.439482   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.439662   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.439833   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.610190   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:31:11.654061   77400 node_ready.go:35] waiting up to 6m0s for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663869   77400 node_ready.go:49] node "no-preload-407991" has status "Ready":"True"
	I0422 18:31:11.663904   77400 node_ready.go:38] duration metric: took 9.806821ms for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663917   77400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:11.673895   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:11.752785   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.770023   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:31:11.770054   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:31:11.799895   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.872083   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:31:11.872113   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:31:11.984597   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:11.984626   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:31:12.059137   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:13.130584   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330646778s)
	I0422 18:31:13.130694   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130718   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.130716   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37789401s)
	I0422 18:31:13.130833   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130847   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131067   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131135   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131159   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131172   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131289   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131304   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131312   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131319   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131327   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.131559   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131574   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131601   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131621   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131621   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.173181   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.173205   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.173478   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.173501   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.279764   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.220585481s)
	I0422 18:31:13.279813   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.279828   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280221   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280241   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280261   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280276   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.280290   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280532   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280570   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280577   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280586   77400 addons.go:470] Verifying addon metrics-server=true in "no-preload-407991"
	I0422 18:31:13.282757   77400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:31:13.284029   77400 addons.go:505] duration metric: took 1.924572004s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
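	The three addons are enabled by copying their manifests into /etc/kubernetes/addons and applying them with the cluster's own kubectl binary. Replayed as an illustrative shell session, with the paths and flags exactly as logged:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl \
	  apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl \
	  apply -f /etc/kubernetes/addons/storageclass.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl \
	  apply -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	        -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	        -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	        -f /etc/kubernetes/addons/metrics-server-service.yaml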
	I0422 18:31:13.681968   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.682004   77400 pod_ready.go:81] duration metric: took 2.008061657s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.682017   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687240   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.687268   77400 pod_ready.go:81] duration metric: took 5.242949ms for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687281   77400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693047   77400 pod_ready.go:92] pod "etcd-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.693074   77400 pod_ready.go:81] duration metric: took 5.784769ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693086   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705008   77400 pod_ready.go:92] pod "kube-apiserver-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.705028   77400 pod_ready.go:81] duration metric: took 11.934672ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705037   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721814   77400 pod_ready.go:92] pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.721840   77400 pod_ready.go:81] duration metric: took 16.796546ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721855   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079660   77400 pod_ready.go:92] pod "kube-proxy-47g8k" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.079681   77400 pod_ready.go:81] duration metric: took 357.819791ms for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079692   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480000   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.480026   77400 pod_ready.go:81] duration metric: took 400.326493ms for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480037   77400 pod_ready.go:38] duration metric: took 2.816106046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:14.480054   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:31:14.480123   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:31:14.508798   77400 api_server.go:72] duration metric: took 3.149365253s to wait for apiserver process to appear ...
	I0422 18:31:14.508822   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:31:14.508842   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:31:14.523293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:31:14.524410   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:31:14.524439   77400 api_server.go:131] duration metric: took 15.608906ms to wait for apiserver health ...
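	The health probe above hits https://192.168.39.164:8443/healthz directly and then reads the control-plane version. A hand-run equivalent (illustrative) using the same kubectl binary and kubeconfig from this run:

	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get --raw /healthz          # expect: ok
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  version -o yaml             # reports the v1.30.0 server version seen above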
	I0422 18:31:14.524448   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:31:14.682120   77400 system_pods.go:59] 9 kube-system pods found
	I0422 18:31:14.682152   77400 system_pods.go:61] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:14.682157   77400 system_pods.go:61] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:14.682161   77400 system_pods.go:61] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:14.682164   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:14.682169   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:14.682173   77400 system_pods.go:61] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:14.682178   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:14.682188   77400 system_pods.go:61] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:14.682194   77400 system_pods.go:61] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:14.682205   77400 system_pods.go:74] duration metric: took 157.750249ms to wait for pod list to return data ...
	I0422 18:31:14.682222   77400 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:31:14.878556   77400 default_sa.go:45] found service account: "default"
	I0422 18:31:14.878581   77400 default_sa.go:55] duration metric: took 196.353021ms for default service account to be created ...
	I0422 18:31:14.878590   77400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:31:15.081385   77400 system_pods.go:86] 9 kube-system pods found
	I0422 18:31:15.081415   77400 system_pods.go:89] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:15.081425   77400 system_pods.go:89] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:15.081430   77400 system_pods.go:89] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:15.081434   77400 system_pods.go:89] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:15.081438   77400 system_pods.go:89] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:15.081448   77400 system_pods.go:89] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:15.081452   77400 system_pods.go:89] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:15.081458   77400 system_pods.go:89] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:15.081464   77400 system_pods.go:89] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:15.081476   77400 system_pods.go:126] duration metric: took 202.881032ms to wait for k8s-apps to be running ...
	I0422 18:31:15.081484   77400 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:31:15.081530   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:15.098245   77400 system_svc.go:56] duration metric: took 16.748933ms WaitForService to wait for kubelet
	I0422 18:31:15.098278   77400 kubeadm.go:576] duration metric: took 3.738847086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:31:15.098302   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:31:15.278812   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:31:15.278839   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:31:15.278848   77400 node_conditions.go:105] duration metric: took 180.541553ms to run NodePressure ...
	I0422 18:31:15.278859   77400 start.go:240] waiting for startup goroutines ...
	I0422 18:31:15.278866   77400 start.go:245] waiting for cluster config update ...
	I0422 18:31:15.278875   77400 start.go:254] writing updated cluster config ...
	I0422 18:31:15.279242   77400 ssh_runner.go:195] Run: rm -f paused
	I0422 18:31:15.330788   77400 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:31:15.333274   77400 out.go:177] * Done! kubectl is now configured to use "no-preload-407991" cluster and "default" namespace by default
	I0422 18:31:28.163100   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:31:28.163394   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163417   78377 kubeadm.go:309] 
	I0422 18:31:28.163487   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:31:28.163724   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:31:28.163734   78377 kubeadm.go:309] 
	I0422 18:31:28.163791   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:31:28.163857   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:31:28.164010   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:31:28.164024   78377 kubeadm.go:309] 
	I0422 18:31:28.164159   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:31:28.164207   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:31:28.164251   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:31:28.164265   78377 kubeadm.go:309] 
	I0422 18:31:28.164413   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:31:28.164579   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:31:28.164607   78377 kubeadm.go:309] 
	I0422 18:31:28.164767   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:31:28.164919   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:31:28.165050   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:31:28.165153   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:31:28.165169   78377 kubeadm.go:309] 
	I0422 18:31:28.166948   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:31:28.167081   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:31:28.167206   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:31:28.167328   78377 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
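	This kubeadm init attempt (the v1.20.0 run under PID 78377) fails at wait-control-plane because the kubelet never answers on 127.0.0.1:10248. The diagnostics kubeadm suggests in the output above, collected as one illustrative shell session on the node:

	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# the exact health probe kubeadm keeps retrying:
	curl -sSL http://localhost:10248/healthz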
	
	I0422 18:31:28.167404   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:31:28.857637   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:28.875137   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:31:28.887680   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:31:28.887713   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:31:28.887768   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:31:28.900305   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:31:28.900364   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:31:28.912825   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:31:28.927080   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:31:28.927184   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:31:28.939052   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.949650   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:31:28.949726   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.960782   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:31:28.972073   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:31:28.972131   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:31:28.983161   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:31:29.220135   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:33:25.762018   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:33:25.762162   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:33:25.763935   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:33:25.763996   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:33:25.764109   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:33:25.764234   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:33:25.764384   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:33:25.764478   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:33:25.766215   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:33:25.766332   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:33:25.766425   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:33:25.766525   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:33:25.766612   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:33:25.766680   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:33:25.766725   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:33:25.766778   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:33:25.766829   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:33:25.766907   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:33:25.766999   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:33:25.767062   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:33:25.767150   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:33:25.767210   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:33:25.767277   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:33:25.767378   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:33:25.767465   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:33:25.767602   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:33:25.767714   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:33:25.767848   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:33:25.767944   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:33:25.769378   78377 out.go:204]   - Booting up control plane ...
	I0422 18:33:25.769497   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:33:25.769600   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:33:25.769691   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:33:25.769819   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:33:25.769987   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:33:25.770059   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:33:25.770164   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770451   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770538   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770748   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770827   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771002   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771066   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771264   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771397   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771583   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771594   78377 kubeadm.go:309] 
	I0422 18:33:25.771655   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:33:25.771711   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:33:25.771726   78377 kubeadm.go:309] 
	I0422 18:33:25.771779   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:33:25.771836   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:33:25.771973   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:33:25.771981   78377 kubeadm.go:309] 
	I0422 18:33:25.772091   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:33:25.772132   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:33:25.772175   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:33:25.772182   78377 kubeadm.go:309] 
	I0422 18:33:25.772286   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:33:25.772374   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:33:25.772381   78377 kubeadm.go:309] 
	I0422 18:33:25.772491   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:33:25.772570   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:33:25.772641   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:33:25.772702   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:33:25.772741   78377 kubeadm.go:309] 
	I0422 18:33:25.772767   78377 kubeadm.go:393] duration metric: took 7m59.977108208s to StartCluster
	I0422 18:33:25.772800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:33:25.772854   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:33:25.824904   78377 cri.go:89] found id: ""
	I0422 18:33:25.824928   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.824946   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:33:25.824957   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:33:25.825011   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:33:25.864537   78377 cri.go:89] found id: ""
	I0422 18:33:25.864563   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.864570   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:33:25.864575   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:33:25.864630   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:33:25.906760   78377 cri.go:89] found id: ""
	I0422 18:33:25.906784   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.906793   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:33:25.906800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:33:25.906868   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:33:25.945325   78377 cri.go:89] found id: ""
	I0422 18:33:25.945347   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.945354   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:33:25.945360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:33:25.945407   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:33:25.984005   78377 cri.go:89] found id: ""
	I0422 18:33:25.984035   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.984052   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:33:25.984059   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:33:25.984121   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:33:26.023499   78377 cri.go:89] found id: ""
	I0422 18:33:26.023525   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.023535   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:33:26.023549   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:33:26.023611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:33:26.064439   78377 cri.go:89] found id: ""
	I0422 18:33:26.064468   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.064479   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:33:26.064487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:33:26.064552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:33:26.104231   78377 cri.go:89] found id: ""
	I0422 18:33:26.104254   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.104262   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:33:26.104270   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:33:26.104282   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:33:26.213826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:33:26.213871   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:33:26.278837   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:33:26.278866   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:33:26.337634   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:33:26.337677   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:33:26.351578   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:33:26.351605   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:33:26.445108   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0422 18:33:26.445139   78377 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:33:26.445177   78377 out.go:239] * 
	W0422 18:33:26.445248   78377 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.445279   78377 out.go:239] * 
	W0422 18:33:26.446406   78377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:33:26.450209   78377 out.go:177] 
	W0422 18:33:26.451494   78377 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.451552   78377 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:33:26.451576   78377 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:33:26.453333   78377 out.go:177] 
	
	
	==> CRI-O <==
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.236877197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713810808236858107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48b8f603-d777-4bdd-80ac-7fd77ea1ee02 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.237583955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4cc5d38-b760-4353-925d-3b45c12beb23 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.237631397Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4cc5d38-b760-4353-925d-3b45c12beb23 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.237662282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f4cc5d38-b760-4353-925d-3b45c12beb23 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.271006887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f494284a-1578-4d4e-a6b3-b87989b8c285 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.271093349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f494284a-1578-4d4e-a6b3-b87989b8c285 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.272474967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b93301b-b8c8-4211-ba14-31865471a758 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.272880255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713810808272857134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b93301b-b8c8-4211-ba14-31865471a758 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.273645131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c88a2bc-c743-49db-958d-0e4762081be9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.273713215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c88a2bc-c743-49db-958d-0e4762081be9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.273754973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4c88a2bc-c743-49db-958d-0e4762081be9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.309666080Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1e2b9e4-eab3-4ae1-b56d-6e74ce06a554 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.309790936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1e2b9e4-eab3-4ae1-b56d-6e74ce06a554 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.311261546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=691b75d0-65d0-4bc2-b0ee-4f3892568ee8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.311692917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713810808311665230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=691b75d0-65d0-4bc2-b0ee-4f3892568ee8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.312670213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=471b9169-b848-4bba-9538-52e7b3d27d30 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.312744291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=471b9169-b848-4bba-9538-52e7b3d27d30 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.312780459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=471b9169-b848-4bba-9538-52e7b3d27d30 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.348299915Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4083df9c-db15-4f03-9b81-bb208ed06848 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.348494268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4083df9c-db15-4f03-9b81-bb208ed06848 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.350232110Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9bd123a8-f19c-4adc-8d55-0a9d79707b37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.350710491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713810808350672092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bd123a8-f19c-4adc-8d55-0a9d79707b37 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.351331696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28d3d468-66c7-44c4-9e9c-e786edfad722 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.351428790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28d3d468-66c7-44c4-9e9c-e786edfad722 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:33:28 old-k8s-version-367072 crio[648]: time="2024-04-22 18:33:28.351461858Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=28d3d468-66c7-44c4-9e9c-e786edfad722 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr22 18:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054750] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043660] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr22 18:25] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922715] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.744071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.637131] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.065794] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061682] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.221839] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.164619] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.287340] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +7.158439] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.071484] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.066379] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[ +11.632913] kauditd_printk_skb: 46 callbacks suppressed
	[Apr22 18:29] systemd-fstab-generator[4961]: Ignoring "noauto" option for root device
	[Apr22 18:31] systemd-fstab-generator[5238]: Ignoring "noauto" option for root device
	[  +0.069844] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:33:28 up 8 min,  0 users,  load average: 0.04, 0.10, 0.05
	Linux old-k8s-version-367072 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc0008b3680)
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: goroutine 161 [select]:
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c65ef0, 0x4f0ac20, 0xc0008b70e0, 0x1, 0xc0001000c0)
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e0700, 0xc0001000c0)
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0008b9060, 0xc000b76f20)
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 22 18:33:25 old-k8s-version-367072 kubelet[5419]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 22 18:33:25 old-k8s-version-367072 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 22 18:33:25 old-k8s-version-367072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 22 18:33:26 old-k8s-version-367072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 22 18:33:26 old-k8s-version-367072 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 22 18:33:26 old-k8s-version-367072 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 22 18:33:26 old-k8s-version-367072 kubelet[5468]: I0422 18:33:26.302318    5468 server.go:416] Version: v1.20.0
	Apr 22 18:33:26 old-k8s-version-367072 kubelet[5468]: I0422 18:33:26.302764    5468 server.go:837] Client rotation is on, will bootstrap in background
	Apr 22 18:33:26 old-k8s-version-367072 kubelet[5468]: I0422 18:33:26.308231    5468 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 22 18:33:26 old-k8s-version-367072 kubelet[5468]: W0422 18:33:26.311219    5468 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 22 18:33:26 old-k8s-version-367072 kubelet[5468]: I0422 18:33:26.311507    5468 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (248.754289ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-367072" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (705.31s)
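
Not part of the captured output above: the following is a hedged diagnostic sketch that simply collects the checks the kubeadm and minikube messages in this failure recommend. It assumes the old-k8s-version-367072 profile still exists locally under the kvm2 driver and is reachable with 'minikube ssh'; the profile name, driver, and Kubernetes version are taken from this test's own logs, and everything else should be adjusted to your environment.

# Kubelet health on the node, as the kubeadm output suggests:
minikube ssh -p old-k8s-version-367072 "sudo systemctl status kubelet --no-pager"
minikube ssh -p old-k8s-version-367072 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"

# Any control-plane containers CRI-O managed to start:
minikube ssh -p old-k8s-version-367072 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

# Retry the start with the cgroup-driver override from the minikube suggestion:
minikube start -p old-k8s-version-367072 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd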

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0422 18:30:07.902683   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782377 -n embed-certs-782377
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-22 18:39:02.728152347 +0000 UTC m=+6127.457765960
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-782377 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-782377 logs -n 25: (2.068194994s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo cat                              | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:21:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:21:44.651239   78377 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:21:44.651502   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651512   78377 out.go:304] Setting ErrFile to fd 2...
	I0422 18:21:44.651517   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651743   78377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:21:44.652361   78377 out.go:298] Setting JSON to false
	I0422 18:21:44.653361   78377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7450,"bootTime":1713802655,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:21:44.653418   78377 start.go:139] virtualization: kvm guest
	I0422 18:21:44.655663   78377 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:21:44.657140   78377 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:21:44.658441   78377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:21:44.657169   78377 notify.go:220] Checking for updates...
	I0422 18:21:44.661128   78377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:21:44.662518   78377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:21:44.663775   78377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:21:44.665418   78377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:21:44.667565   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:21:44.667940   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.667974   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.682806   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0422 18:21:44.683248   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.683772   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.683796   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.684162   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.684386   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.686458   78377 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:21:44.688047   78377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:21:44.688430   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.688471   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.703069   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0422 18:21:44.703543   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.704022   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.704045   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.704344   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.704551   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.740500   78377 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:21:44.741959   78377 start.go:297] selected driver: kvm2
	I0422 18:21:44.741977   78377 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.742115   78377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:21:44.742852   78377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.742936   78377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:21:44.757771   78377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:21:44.758147   78377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:21:44.758223   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:21:44.758237   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:21:44.758283   78377 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.758417   78377 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.760296   78377 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:21:44.761538   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:21:44.761589   78377 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:21:44.761603   78377 cache.go:56] Caching tarball of preloaded images
	I0422 18:21:44.761682   78377 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:21:44.761696   78377 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:21:44.761815   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:21:44.762033   78377 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:21:45.719482   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:48.791433   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:54.871446   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:57.943441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:04.023441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:07.095417   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:13.175430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:16.247522   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:22.327414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:25.399441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:31.479440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:34.551439   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:40.631451   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:43.703447   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:49.783400   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:52.855484   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:58.935464   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:02.007435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:08.087442   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:11.159452   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:17.239435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:20.311430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:26.391420   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:29.463418   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:35.543443   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:38.615421   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:44.695419   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:47.767475   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:53.847471   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:56.919436   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:02.999404   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:06.071458   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:12.151440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:15.223414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:18.227587   77634 start.go:364] duration metric: took 4m29.759611802s to acquireMachinesLock for "embed-certs-782377"
	I0422 18:24:18.227650   77634 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:18.227661   77634 fix.go:54] fixHost starting: 
	I0422 18:24:18.227979   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:18.228013   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:18.243001   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0422 18:24:18.243415   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:18.243835   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:24:18.243850   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:18.244219   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:18.244384   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:18.244534   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:24:18.246202   77634 fix.go:112] recreateIfNeeded on embed-certs-782377: state=Stopped err=<nil>
	I0422 18:24:18.246228   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	W0422 18:24:18.246399   77634 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:18.248257   77634 out.go:177] * Restarting existing kvm2 VM for "embed-certs-782377" ...
	I0422 18:24:18.249777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Start
	I0422 18:24:18.249966   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring networks are active...
	I0422 18:24:18.250666   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network default is active
	I0422 18:24:18.251036   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network mk-embed-certs-782377 is active
	I0422 18:24:18.251499   77634 main.go:141] libmachine: (embed-certs-782377) Getting domain xml...
	I0422 18:24:18.252150   77634 main.go:141] libmachine: (embed-certs-782377) Creating domain...
	I0422 18:24:18.225125   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:18.225168   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225565   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:24:18.225593   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225781   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:24:18.227460   77400 machine.go:97] duration metric: took 4m37.410379606s to provisionDockerMachine
	I0422 18:24:18.227495   77400 fix.go:56] duration metric: took 4m37.433636251s for fixHost
	I0422 18:24:18.227499   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 4m37.433656207s
	W0422 18:24:18.227517   77400 start.go:713] error starting host: provision: host is not running
	W0422 18:24:18.227584   77400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0422 18:24:18.227593   77400 start.go:728] Will try again in 5 seconds ...
	I0422 18:24:19.442937   77634 main.go:141] libmachine: (embed-certs-782377) Waiting to get IP...
	I0422 18:24:19.444048   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.444425   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.444484   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.444392   78906 retry.go:31] will retry after 283.008432ms: waiting for machine to come up
	I0422 18:24:19.729076   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.729457   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.729493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.729411   78906 retry.go:31] will retry after 252.047573ms: waiting for machine to come up
	I0422 18:24:19.983011   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.983417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.983442   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.983397   78906 retry.go:31] will retry after 300.528755ms: waiting for machine to come up
	I0422 18:24:20.286039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.286467   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.286500   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.286425   78906 retry.go:31] will retry after 426.555496ms: waiting for machine to come up
	I0422 18:24:20.715191   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.715601   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.715638   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.715525   78906 retry.go:31] will retry after 533.433633ms: waiting for machine to come up
	I0422 18:24:21.250151   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:21.250702   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:21.250732   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:21.250646   78906 retry.go:31] will retry after 854.033547ms: waiting for machine to come up
	I0422 18:24:22.106728   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.107083   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.107109   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.107036   78906 retry.go:31] will retry after 761.233698ms: waiting for machine to come up
	I0422 18:24:22.870007   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.870408   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.870435   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.870364   78906 retry.go:31] will retry after 1.121568589s: waiting for machine to come up
	I0422 18:24:23.229316   77400 start.go:360] acquireMachinesLock for no-preload-407991: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:24:23.993127   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:23.993600   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:23.993623   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:23.993535   78906 retry.go:31] will retry after 1.525222377s: waiting for machine to come up
	I0422 18:24:25.520203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:25.520584   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:25.520609   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:25.520557   78906 retry.go:31] will retry after 1.618927059s: waiting for machine to come up
	I0422 18:24:27.140862   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:27.141363   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:27.141391   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:27.141315   78906 retry.go:31] will retry after 1.828869827s: waiting for machine to come up
	I0422 18:24:28.972053   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:28.972472   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:28.972508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:28.972438   78906 retry.go:31] will retry after 2.456935091s: waiting for machine to come up
	I0422 18:24:31.430825   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:31.431208   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:31.431266   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:31.431181   78906 retry.go:31] will retry after 3.415431602s: waiting for machine to come up
	I0422 18:24:36.144008   77929 start.go:364] duration metric: took 4m11.537292071s to acquireMachinesLock for "default-k8s-diff-port-856422"
	I0422 18:24:36.144073   77929 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:36.144079   77929 fix.go:54] fixHost starting: 
	I0422 18:24:36.144413   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:36.144450   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:36.161253   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0422 18:24:36.161715   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:36.162147   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:24:36.162166   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:36.162536   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:36.162743   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:36.162914   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:24:36.164366   77929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-856422: state=Stopped err=<nil>
	I0422 18:24:36.164397   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	W0422 18:24:36.164563   77929 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:36.166915   77929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-856422" ...
	I0422 18:24:34.847819   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848316   77634 main.go:141] libmachine: (embed-certs-782377) Found IP for machine: 192.168.50.114
	I0422 18:24:34.848339   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has current primary IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848357   77634 main.go:141] libmachine: (embed-certs-782377) Reserving static IP address...
	I0422 18:24:34.848741   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.848769   77634 main.go:141] libmachine: (embed-certs-782377) DBG | skip adding static IP to network mk-embed-certs-782377 - found existing host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"}
	I0422 18:24:34.848782   77634 main.go:141] libmachine: (embed-certs-782377) Reserved static IP address: 192.168.50.114
	I0422 18:24:34.848801   77634 main.go:141] libmachine: (embed-certs-782377) Waiting for SSH to be available...
	I0422 18:24:34.848808   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Getting to WaitForSSH function...
	I0422 18:24:34.850829   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851167   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.851199   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851332   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH client type: external
	I0422 18:24:34.851352   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa (-rw-------)
	I0422 18:24:34.851383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:34.851402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | About to run SSH command:
	I0422 18:24:34.851417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | exit 0
	I0422 18:24:34.975383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:34.975812   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetConfigRaw
	I0422 18:24:34.976602   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:34.979578   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.979959   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.979992   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.980238   77634 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/config.json ...
	I0422 18:24:34.980472   77634 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:34.980497   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:34.980777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:34.983493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.983958   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.983999   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.984175   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:34.984372   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984710   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:34.984894   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:34.985074   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:34.985086   77634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:35.099838   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:35.099873   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100144   77634 buildroot.go:166] provisioning hostname "embed-certs-782377"
	I0422 18:24:35.100169   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100381   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.103203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103589   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.103618   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103754   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.103930   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104116   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104262   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.104446   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.104696   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.104720   77634 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-782377 && echo "embed-certs-782377" | sudo tee /etc/hostname
	I0422 18:24:35.223934   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-782377
	
	I0422 18:24:35.223962   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.227033   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227376   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.227413   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.227779   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.227976   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.228140   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.228334   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.228492   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.228508   77634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-782377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-782377/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-782377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:35.346513   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:35.346545   77634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:35.346561   77634 buildroot.go:174] setting up certificates
	I0422 18:24:35.346571   77634 provision.go:84] configureAuth start
	I0422 18:24:35.346598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.346898   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:35.349820   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350164   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.350192   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350301   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.352921   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353288   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.353314   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353488   77634 provision.go:143] copyHostCerts
	I0422 18:24:35.353543   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:35.353552   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:35.353619   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:35.353717   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:35.353725   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:35.353749   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:35.353801   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:35.353810   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:35.353831   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:35.353894   77634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.embed-certs-782377 san=[127.0.0.1 192.168.50.114 embed-certs-782377 localhost minikube]
	I0422 18:24:35.463676   77634 provision.go:177] copyRemoteCerts
	I0422 18:24:35.463733   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:35.463758   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.466567   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.467039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.467415   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.467605   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.467740   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.549947   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:35.576364   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:24:35.601539   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:35.625959   77634 provision.go:87] duration metric: took 279.37435ms to configureAuth
	I0422 18:24:35.625992   77634 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:35.626171   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:35.626235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.629095   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.629533   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629707   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.629934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630077   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630238   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.630365   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.630546   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.630563   77634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:35.906862   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:35.906892   77634 machine.go:97] duration metric: took 926.403466ms to provisionDockerMachine
	I0422 18:24:35.906905   77634 start.go:293] postStartSetup for "embed-certs-782377" (driver="kvm2")
	I0422 18:24:35.906916   77634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:35.906934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:35.907241   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:35.907277   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.910029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.910438   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910599   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.910782   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.910993   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.911168   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.994189   77634 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:35.998376   77634 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:35.998395   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:35.998468   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:35.998545   77634 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:35.998650   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:36.008268   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:36.034031   77634 start.go:296] duration metric: took 127.110389ms for postStartSetup
	I0422 18:24:36.034081   77634 fix.go:56] duration metric: took 17.806421597s for fixHost
	I0422 18:24:36.034100   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.036964   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037357   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.037380   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.037775   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038051   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.038403   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:36.038568   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:36.038579   77634 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:24:36.143878   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810276.108619822
	
	I0422 18:24:36.143903   77634 fix.go:216] guest clock: 1713810276.108619822
	I0422 18:24:36.143911   77634 fix.go:229] Guest: 2024-04-22 18:24:36.108619822 +0000 UTC Remote: 2024-04-22 18:24:36.034084746 +0000 UTC m=+287.715620683 (delta=74.535076ms)
	I0422 18:24:36.143936   77634 fix.go:200] guest clock delta is within tolerance: 74.535076ms
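	(As a sanity check on the numbers above: the delta is simply guest minus host, 18:24:36.108619822 - 18:24:36.034084746 ≈ 0.074535s, i.e. the 74.535076ms reported, which fix.go accepts as within tolerance.)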
	I0422 18:24:36.143941   77634 start.go:83] releasing machines lock for "embed-certs-782377", held for 17.916313877s
	I0422 18:24:36.143966   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.144235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:36.146867   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147228   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.147257   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147431   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.147883   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148066   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148171   77634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:36.148218   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.148377   77634 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:36.148403   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.150838   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151150   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151176   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151268   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151296   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.151466   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.151628   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.151671   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151695   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151747   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.151880   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.152055   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.152209   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.152350   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.229109   77634 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:36.266621   77634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:36.421344   77634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:36.427814   77634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:36.427892   77634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:36.448157   77634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:36.448192   77634 start.go:494] detecting cgroup driver to use...
	I0422 18:24:36.448255   77634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:36.468930   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:36.485780   77634 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:36.485856   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:36.502182   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:36.521179   77634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:36.636244   77634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:36.783292   77634 docker.go:233] disabling docker service ...
	I0422 18:24:36.783366   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:36.803014   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:36.817938   77634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:36.957954   77634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:37.085750   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:37.101054   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:37.123504   77634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:37.123555   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.134422   77634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:37.134491   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.145961   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.157192   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.170117   77634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:37.188656   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.205792   77634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.225739   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
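	(Reconstruction, not a dump of the actual file: after the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following settings.)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]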
	I0422 18:24:37.236719   77634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:37.246351   77634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:37.246401   77634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
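	(Illustrative manual check, not part of this run: the sysctl failure above only means br_netfilter was not loaded yet; after the modprobe the key should resolve rather than fail with "No such file or directory".)
	lsmod | grep br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables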
	I0422 18:24:37.261144   77634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:37.271464   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:37.395686   77634 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:37.534079   77634 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:37.534156   77634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:37.539212   77634 start.go:562] Will wait 60s for crictl version
	I0422 18:24:37.539285   77634 ssh_runner.go:195] Run: which crictl
	I0422 18:24:37.543239   77634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:37.581460   77634 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:37.581562   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.611743   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.645811   77634 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:37.647247   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:37.650321   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.650811   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:37.650841   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.651055   77634 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:37.655865   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:37.673617   77634 kubeadm.go:877] updating cluster {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:37.673732   77634 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:37.673785   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:37.718534   77634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:37.718609   77634 ssh_runner.go:195] Run: which lz4
	I0422 18:24:37.723369   77634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:37.728270   77634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:37.728303   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:36.168344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Start
	I0422 18:24:36.168494   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring networks are active...
	I0422 18:24:36.169419   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network default is active
	I0422 18:24:36.169811   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network mk-default-k8s-diff-port-856422 is active
	I0422 18:24:36.170341   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Getting domain xml...
	I0422 18:24:36.171019   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Creating domain...
	I0422 18:24:37.407148   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting to get IP...
	I0422 18:24:37.408083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408430   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408509   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.408416   79040 retry.go:31] will retry after 267.855158ms: waiting for machine to come up
	I0422 18:24:37.677765   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678134   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678168   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.678084   79040 retry.go:31] will retry after 267.61504ms: waiting for machine to come up
	I0422 18:24:37.947737   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948250   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.948216   79040 retry.go:31] will retry after 351.088664ms: waiting for machine to come up
	I0422 18:24:38.300548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301057   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301090   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.301011   79040 retry.go:31] will retry after 560.164848ms: waiting for machine to come up
	I0422 18:24:38.862557   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863114   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.863075   79040 retry.go:31] will retry after 590.286684ms: waiting for machine to come up
	I0422 18:24:39.454925   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455483   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455510   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:39.455428   79040 retry.go:31] will retry after 870.474888ms: waiting for machine to come up
	I0422 18:24:39.338447   77634 crio.go:462] duration metric: took 1.615205556s to copy over tarball
	I0422 18:24:39.338545   77634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:41.640474   77634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301883484s)
	I0422 18:24:41.640514   77634 crio.go:469] duration metric: took 2.302038123s to extract the tarball
	I0422 18:24:41.640524   77634 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:24:41.680325   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:41.724755   77634 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:24:41.724777   77634 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:24:41.724785   77634 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.30.0 crio true true} ...
	I0422 18:24:41.724887   77634 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-782377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:24:41.724964   77634 ssh_runner.go:195] Run: crio config
	I0422 18:24:41.772680   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:41.772704   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:41.772715   77634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:24:41.772733   77634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-782377 NodeName:embed-certs-782377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:24:41.772898   77634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-782377"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:24:41.772964   77634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:24:41.783492   77634 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:24:41.783575   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:24:41.793500   77634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0422 18:24:41.810415   77634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:24:41.827504   77634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0422 18:24:41.845704   77634 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0422 18:24:41.849728   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:41.862798   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:41.998260   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:24:42.018779   77634 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377 for IP: 192.168.50.114
	I0422 18:24:42.018801   77634 certs.go:194] generating shared ca certs ...
	I0422 18:24:42.018820   77634 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:24:42.018977   77634 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:24:42.019034   77634 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:24:42.019048   77634 certs.go:256] generating profile certs ...
	I0422 18:24:42.019146   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/client.key
	I0422 18:24:42.019218   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key.d804c20e
	I0422 18:24:42.019298   77634 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key
	I0422 18:24:42.019455   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:24:42.019493   77634 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:24:42.019509   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:24:42.019539   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:24:42.019571   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:24:42.019606   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:24:42.019665   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:42.020460   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:24:42.065297   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:24:42.098581   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:24:42.139751   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:24:42.169770   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0422 18:24:42.199958   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:24:42.229298   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:24:42.254517   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:24:42.279390   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:24:42.303872   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:24:42.329704   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:24:42.355108   77634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:24:42.372684   77634 ssh_runner.go:195] Run: openssl version
	I0422 18:24:42.378631   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:24:42.389709   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394492   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394552   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.400346   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:24:42.411335   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:24:42.422568   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427213   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427278   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.433277   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:24:42.444618   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:24:42.455793   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460681   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460739   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.466785   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:24:42.485401   77634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:24:42.491205   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:24:42.498635   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:24:42.510577   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:24:42.517596   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:24:42.524413   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:24:42.530872   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:24:42.537199   77634 kubeadm.go:391] StartCluster: {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:24:42.537319   77634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:24:42.537379   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.579863   77634 cri.go:89] found id: ""
	I0422 18:24:42.579944   77634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:24:42.590756   77634 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:24:42.590781   77634 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:24:42.590788   77634 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:24:42.590844   77634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:24:42.601517   77634 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:24:42.603120   77634 kubeconfig.go:125] found "embed-certs-782377" server: "https://192.168.50.114:8443"
	I0422 18:24:42.606189   77634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:24:42.616881   77634 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0422 18:24:42.616911   77634 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:24:42.616922   77634 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:24:42.616970   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.656829   77634 cri.go:89] found id: ""
	I0422 18:24:42.656923   77634 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:24:42.675575   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:24:42.686408   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:24:42.686431   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:24:42.686484   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:24:42.697303   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:24:42.697391   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:24:42.707693   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:24:42.717836   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:24:42.717932   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:24:42.729952   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.740902   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:24:42.740980   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.751946   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:24:42.761758   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:24:42.761830   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:24:42.772699   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:24:42.783018   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:42.891737   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:40.327325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327782   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:40.327726   79040 retry.go:31] will retry after 926.321969ms: waiting for machine to come up
	I0422 18:24:41.255601   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256117   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:41.256072   79040 retry.go:31] will retry after 928.33371ms: waiting for machine to come up
	I0422 18:24:42.186290   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186798   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186826   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:42.186762   79040 retry.go:31] will retry after 1.708117553s: waiting for machine to come up
	I0422 18:24:43.896236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:43.896597   79040 retry.go:31] will retry after 1.720003793s: waiting for machine to come up
	I0422 18:24:44.055395   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.163622709s)
	I0422 18:24:44.055429   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.278840   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.351743   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.460115   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:24:44.460202   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:44.960631   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.460588   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.478048   77634 api_server.go:72] duration metric: took 1.017932232s to wait for apiserver process to appear ...
	I0422 18:24:45.478082   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:24:45.478104   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:45.478702   77634 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0422 18:24:45.978527   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.247298   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:24:48.247334   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:24:48.247351   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.295953   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.296005   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.478899   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.488884   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.488920   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.978472   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.992521   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.992552   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:49.479179   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:49.485588   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:24:49.493015   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:24:49.493055   77634 api_server.go:131] duration metric: took 4.01496465s to wait for apiserver health ...
	I0422 18:24:49.493065   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:49.493074   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:49.494997   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:24:45.618240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618714   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618744   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:45.618673   79040 retry.go:31] will retry after 2.396679945s: waiting for machine to come up
	I0422 18:24:48.016812   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017231   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017258   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:48.017197   79040 retry.go:31] will retry after 2.304959564s: waiting for machine to come up
	I0422 18:24:49.496476   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:24:49.516525   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:24:49.541103   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:24:49.552224   77634 system_pods.go:59] 8 kube-system pods found
	I0422 18:24:49.552263   77634 system_pods.go:61] "coredns-7db6d8ff4d-lxcv2" [137ad3db-8bc5-4b7f-8eb0-12a278eba41c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:24:49.552273   77634 system_pods.go:61] "etcd-embed-certs-782377" [85322e31-1ad6-4239-8086-f2a465a28d8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:24:49.552287   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [e791d7d4-a94d-4cce-a50d-4e569350f210] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:24:49.552307   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [cbcc2e7f-7b3a-435b-97d5-5b69b7e399c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:24:49.552317   77634 system_pods.go:61] "kube-proxy-r4249" [7ffb3b8f-53d8-45df-8426-74f0ffb0d20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 18:24:49.552327   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [9568040b-3eca-403e-b078-d6f2071e70c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:24:49.552335   77634 system_pods.go:61] "metrics-server-569cc877fc-d8s5p" [3bcda1df-02f7-4405-95c7-4d8559a0138c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:24:49.552342   77634 system_pods.go:61] "storage-provisioner" [c196d779-346a-4e3f-b1c3-dde4292df017] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:24:49.552351   77634 system_pods.go:74] duration metric: took 11.221599ms to wait for pod list to return data ...
	I0422 18:24:49.552373   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:24:49.556086   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:24:49.556130   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:24:49.556142   77634 node_conditions.go:105] duration metric: took 3.764067ms to run NodePressure ...
	I0422 18:24:49.556161   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:49.852023   77634 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856866   77634 kubeadm.go:733] kubelet initialised
	I0422 18:24:49.856894   77634 kubeadm.go:734] duration metric: took 4.83996ms waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856904   77634 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:24:49.863808   77634 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.868817   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868840   77634 pod_ready.go:81] duration metric: took 5.001181ms for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.868849   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868855   77634 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.873591   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873612   77634 pod_ready.go:81] duration metric: took 4.750292ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.873621   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873627   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.878471   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878494   77634 pod_ready.go:81] duration metric: took 4.859998ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.878503   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878510   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.945869   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945909   77634 pod_ready.go:81] duration metric: took 67.385628ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.945923   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945932   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345633   77634 pod_ready.go:92] pod "kube-proxy-r4249" in "kube-system" namespace has status "Ready":"True"
	I0422 18:24:50.345655   77634 pod_ready.go:81] duration metric: took 399.713725ms for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345666   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:52.352988   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:50.324396   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324920   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324953   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:50.324894   79040 retry.go:31] will retry after 4.018790507s: waiting for machine to come up
	I0422 18:24:54.347584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348046   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Found IP for machine: 192.168.61.206
	I0422 18:24:54.348081   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has current primary IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348094   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserving static IP address...
	I0422 18:24:54.348535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserved static IP address: 192.168.61.206
	I0422 18:24:54.348560   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for SSH to be available...
	I0422 18:24:54.348584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.348624   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | skip adding static IP to network mk-default-k8s-diff-port-856422 - found existing host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"}
	I0422 18:24:54.348640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Getting to WaitForSSH function...
	I0422 18:24:54.351069   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351570   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.351608   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH client type: external
	I0422 18:24:54.351758   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa (-rw-------)
	I0422 18:24:54.351793   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:54.351810   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | About to run SSH command:
	I0422 18:24:54.351834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | exit 0
	I0422 18:24:54.479277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:54.479674   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetConfigRaw
	I0422 18:24:54.480350   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.483089   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.483498   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483801   77929 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/config.json ...
	I0422 18:24:54.484031   77929 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:54.484051   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:54.484272   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.486449   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.486857   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486992   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.487178   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487470   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.487635   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.487825   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.487838   77929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:55.812288   78377 start.go:364] duration metric: took 3m11.050220887s to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:24:55.812348   78377 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:55.812359   78377 fix.go:54] fixHost starting: 
	I0422 18:24:55.812769   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:55.812806   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:55.830114   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0422 18:24:55.830528   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:55.831130   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:24:55.831155   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:55.831459   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:55.831688   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:24:55.831855   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:24:55.833322   78377 fix.go:112] recreateIfNeeded on old-k8s-version-367072: state=Stopped err=<nil>
	I0422 18:24:55.833351   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	W0422 18:24:55.833481   78377 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:55.835517   78377 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	I0422 18:24:54.603732   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:54.603759   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.603993   77929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-856422"
	I0422 18:24:54.604017   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.604280   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.606938   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607302   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.607331   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607524   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.607693   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.607856   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.608002   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.608174   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.608381   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.608398   77929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-856422 && echo "default-k8s-diff-port-856422" | sudo tee /etc/hostname
	I0422 18:24:54.734622   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-856422
	
	I0422 18:24:54.734646   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.737804   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738109   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.738141   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.738495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738773   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.738950   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.739157   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.739176   77929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-856422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-856422/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-856422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:54.864646   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:54.864679   77929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:54.864732   77929 buildroot.go:174] setting up certificates
	I0422 18:24:54.864745   77929 provision.go:84] configureAuth start
	I0422 18:24:54.864764   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.865059   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.868205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868626   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.868666   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868868   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.871736   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872118   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.872147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872275   77929 provision.go:143] copyHostCerts
	I0422 18:24:54.872340   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:54.872353   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:54.872424   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:54.872545   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:54.872557   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:54.872598   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:54.872676   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:54.872688   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:54.872718   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:54.872794   77929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-856422 san=[127.0.0.1 192.168.61.206 default-k8s-diff-port-856422 localhost minikube]
	I0422 18:24:55.091765   77929 provision.go:177] copyRemoteCerts
	I0422 18:24:55.091820   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:55.091848   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.094572   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.094939   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.094970   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.095209   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.095501   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.095767   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.095958   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.192243   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:55.223313   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0422 18:24:55.250149   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:55.279442   77929 provision.go:87] duration metric: took 414.679508ms to configureAuth
	I0422 18:24:55.279474   77929 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:55.280056   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:55.280125   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.282806   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.283237   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283405   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.283636   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283803   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283941   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.284109   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.284276   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.284294   77929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:55.565199   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:55.565225   77929 machine.go:97] duration metric: took 1.081180365s to provisionDockerMachine
	I0422 18:24:55.565239   77929 start.go:293] postStartSetup for "default-k8s-diff-port-856422" (driver="kvm2")
	I0422 18:24:55.565282   77929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:55.565312   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.565649   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:55.565682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.568211   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.568614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568809   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.568994   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.569182   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.569352   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.654461   77929 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:55.658992   77929 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:55.659016   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:55.659091   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:55.659199   77929 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:55.659309   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:55.669183   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:55.694953   77929 start.go:296] duration metric: took 129.698973ms for postStartSetup
	I0422 18:24:55.694998   77929 fix.go:56] duration metric: took 19.550918724s for fixHost
	I0422 18:24:55.695021   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.697596   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.697926   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.697958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.698133   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.698325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698479   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698579   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.698680   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.698897   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.698914   77929 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:24:55.812106   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810295.778892948
	
	I0422 18:24:55.812132   77929 fix.go:216] guest clock: 1713810295.778892948
	I0422 18:24:55.812143   77929 fix.go:229] Guest: 2024-04-22 18:24:55.778892948 +0000 UTC Remote: 2024-04-22 18:24:55.69500303 +0000 UTC m=+271.245786903 (delta=83.889918ms)
	I0422 18:24:55.812168   77929 fix.go:200] guest clock delta is within tolerance: 83.889918ms
	I0422 18:24:55.812176   77929 start.go:83] releasing machines lock for "default-k8s-diff-port-856422", held for 19.668119564s
	I0422 18:24:55.812213   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.812500   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:55.815404   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.815786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.815828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.816036   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816526   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816698   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816781   77929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:55.816823   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.817092   77929 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:55.817116   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.819495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819710   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819931   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.819958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820045   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.820181   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820217   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820362   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820366   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820631   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.820716   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820845   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.904810   77929 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:55.937093   77929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:56.089389   77929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:56.096144   77929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:56.096208   77929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:56.118194   77929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:56.118224   77929 start.go:494] detecting cgroup driver to use...
	I0422 18:24:56.118292   77929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:56.134918   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:56.154107   77929 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:56.154180   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:56.168971   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:56.188793   77929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:56.310223   77929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:56.492316   77929 docker.go:233] disabling docker service ...
	I0422 18:24:56.492430   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:56.515169   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:56.529734   77929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:56.670628   77929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:56.810823   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:56.826785   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:56.847682   77929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:56.847741   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.860499   77929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:56.860576   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.872086   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.883347   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.901596   77929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:56.916912   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.928121   77929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.947335   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.958431   77929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:56.968077   77929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:56.968131   77929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:56.982135   77929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:56.991801   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:57.125635   77929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:57.263889   77929 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:57.263973   77929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:57.269573   77929 start.go:562] Will wait 60s for crictl version
	I0422 18:24:57.269627   77929 ssh_runner.go:195] Run: which crictl
	I0422 18:24:57.273613   77929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:57.314357   77929 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:57.314463   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.345062   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.380868   77929 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:54.353338   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:56.853757   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:57.382284   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:57.385215   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:57.385655   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385889   77929 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:57.390482   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:57.405644   77929 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:57.405766   77929 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:57.405868   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:57.452528   77929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:57.452604   77929 ssh_runner.go:195] Run: which lz4
	I0422 18:24:57.456903   77929 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:24:57.461373   77929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:57.461411   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:59.060426   77929 crio.go:462] duration metric: took 1.603560712s to copy over tarball
	I0422 18:24:59.060532   77929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:55.836947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .Start
	I0422 18:24:55.837156   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:24:55.837991   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:24:55.838340   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:24:55.838802   78377 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:24:55.839484   78377 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:24:57.114447   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:24:57.115418   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.115808   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.115885   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.115780   79197 retry.go:31] will retry after 292.692957ms: waiting for machine to come up
	I0422 18:24:57.410220   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.410760   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.410793   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.410707   79197 retry.go:31] will retry after 381.746596ms: waiting for machine to come up
	I0422 18:24:57.794121   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.794537   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.794561   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.794500   79197 retry.go:31] will retry after 343.501318ms: waiting for machine to come up
	I0422 18:24:58.140203   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.140843   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.140872   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.140795   79197 retry.go:31] will retry after 497.222481ms: waiting for machine to come up
	I0422 18:24:58.639611   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.640103   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.640133   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.640061   79197 retry.go:31] will retry after 578.746837ms: waiting for machine to come up
	I0422 18:24:59.220771   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.221312   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.221342   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.221264   79197 retry.go:31] will retry after 773.821721ms: waiting for machine to come up
	I0422 18:24:58.854112   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:00.856147   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:01.563849   77929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503290941s)
	I0422 18:25:01.563881   77929 crio.go:469] duration metric: took 2.503413712s to extract the tarball
	I0422 18:25:01.563891   77929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:01.603330   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:01.649885   77929 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:25:01.649909   77929 cache_images.go:84] Images are preloaded, skipping loading
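For reference, the preload step above amounts to unpacking the cached image tarball into /var so that CRI-O reports the control-plane images without pulling them. A minimal sketch of the same steps run by hand on the node, using only commands and paths already shown in the log (the size comment is the byte count from the scp line above):

    # /preloaded.tar.lz4 was copied onto the node by the scp step (about 395 MB)
    which lz4                                   # extraction needs the lz4 binary on the node
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json            # the preloaded images should now be listed
    sudo rm -f /preloaded.tar.lz4               # the tarball is no longer needed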
	I0422 18:25:01.649916   77929 kubeadm.go:928] updating node { 192.168.61.206 8444 v1.30.0 crio true true} ...
	I0422 18:25:01.650053   77929 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-856422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:01.650143   77929 ssh_runner.go:195] Run: crio config
	I0422 18:25:01.698892   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:01.698915   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:01.698929   77929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:01.698948   77929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.206 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-856422 NodeName:default-k8s-diff-port-856422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:01.699075   77929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.206
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-856422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:01.699150   77929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:01.709830   77929 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:01.709903   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:01.720447   77929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0422 18:25:01.738745   77929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:01.756420   77929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0422 18:25:01.775364   77929 ssh_runner.go:195] Run: grep 192.168.61.206	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:01.779476   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
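The grep/echo pipeline above is an idempotent way to pin a single /etc/hosts entry: any stale line for the name is filtered out, the current mapping is appended, and the result is copied back into place with sudo. The same technique spelled out over several lines, with the IP and hostname taken from the log:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.61.206\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts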
	I0422 18:25:01.792860   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:01.920607   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:01.939637   77929 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422 for IP: 192.168.61.206
	I0422 18:25:01.939658   77929 certs.go:194] generating shared ca certs ...
	I0422 18:25:01.939675   77929 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:01.939858   77929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:01.939911   77929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:01.939922   77929 certs.go:256] generating profile certs ...
	I0422 18:25:01.940026   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/client.key
	I0422 18:25:01.940105   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key.e8400874
	I0422 18:25:01.940170   77929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key
	I0422 18:25:01.940320   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:01.940386   77929 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:01.940400   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:01.940437   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:01.940474   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:01.940506   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:01.940603   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:01.941408   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:01.981392   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:02.020335   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:02.057221   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:02.088571   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 18:25:02.123716   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:02.153926   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:02.183499   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:02.212438   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:02.238650   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:02.265786   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:02.295001   77929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:02.315343   77929 ssh_runner.go:195] Run: openssl version
	I0422 18:25:02.322001   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:02.334785   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340619   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340686   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.348942   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:02.364960   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:02.381460   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386720   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386794   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.392894   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:02.404951   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:02.417334   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423503   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423573   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.430512   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
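The openssl -hash / ln -fs pairs above exist because OpenSSL looks up CA certificates in /etc/ssl/certs through subject-hash symlinks named <hash>.0; the hash printed by openssl x509 -hash -noout is what the corresponding symlink gets called. A short sketch for the minikube CA handled above (b5213941 is the hash seen in this run):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"    # -> b5213941.0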
	I0422 18:25:02.444132   77929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:02.449749   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:02.456667   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:02.463700   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:02.470474   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:02.477324   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:02.483900   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
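The run of openssl x509 -checkend 86400 calls above is the certificate-freshness probe: -checkend N exits 0 when the certificate will still be valid N seconds from now and non-zero otherwise, so a failing check is presumably what would make the restart path regenerate that certificate instead of reusing it. For one of the files checked above:

    if ! sudo openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "apiserver-kubelet-client.crt expires within 24h"
    fi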
	I0422 18:25:02.490614   77929 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:02.490719   77929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:02.490768   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.538766   77929 cri.go:89] found id: ""
	I0422 18:25:02.538849   77929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:02.549686   77929 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:02.549711   77929 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:02.549717   77929 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:02.549794   77929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:02.560594   77929 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:02.561584   77929 kubeconfig.go:125] found "default-k8s-diff-port-856422" server: "https://192.168.61.206:8444"
	I0422 18:25:02.563656   77929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:02.575462   77929 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.206
	I0422 18:25:02.575507   77929 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:02.575522   77929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:02.575606   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.628012   77929 cri.go:89] found id: ""
	I0422 18:25:02.628080   77929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:02.645405   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:02.656723   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:02.656751   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:02.656814   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:25:02.667202   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:02.667269   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:02.678303   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:25:02.688600   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:02.688690   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:02.699963   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.710329   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:02.710393   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.721188   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:25:02.731964   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:02.732040   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:02.743541   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:02.755030   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:02.870301   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:03.995375   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125032803s)
	I0422 18:25:03.995447   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.230252   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.302979   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
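Note that on a restart the control plane is rebuilt by replaying individual kubeadm init phases against the generated /var/tmp/minikube/kubeadm.yaml rather than by running a full kubeadm init. The sequence just executed, written out as it would be run by hand (binary and config paths are the ones in the log):

    CONF=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.30.0
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all          --config "$CONF"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all     --config "$CONF"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start      --config "$CONF"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all  --config "$CONF"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local         --config "$CONF"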
	I0422 18:25:04.395038   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:04.395115   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:59.996437   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.996984   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.997018   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.996926   79197 retry.go:31] will retry after 1.191182438s: waiting for machine to come up
	I0422 18:25:01.190382   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:01.190954   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:01.190990   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:01.190917   79197 retry.go:31] will retry after 1.312288818s: waiting for machine to come up
	I0422 18:25:02.504320   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:02.504783   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:02.504807   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:02.504744   79197 retry.go:31] will retry after 1.553447941s: waiting for machine to come up
	I0422 18:25:04.060300   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:04.060822   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:04.060855   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:04.060778   79197 retry.go:31] will retry after 1.790234912s: waiting for machine to come up
	I0422 18:25:03.502023   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.353882   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:04.353905   77634 pod_ready.go:81] duration metric: took 14.00823208s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:04.353915   77634 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:06.363356   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:08.363954   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.896176   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.396048   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.440071   77929 api_server.go:72] duration metric: took 1.045032787s to wait for apiserver process to appear ...
	I0422 18:25:05.440103   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:25:05.440148   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.759542   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.759577   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.759592   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.793255   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.793294   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.940652   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.945611   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:08.945646   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:09.440292   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.464743   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.464770   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:05.852898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:05.853386   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:05.853413   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:05.853350   79197 retry.go:31] will retry after 2.265221688s: waiting for machine to come up
	I0422 18:25:08.121376   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:08.121797   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:08.121835   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:08.121747   79197 retry.go:31] will retry after 3.098868652s: waiting for machine to come up
	I0422 18:25:09.940470   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.946872   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.946900   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:10.441291   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:10.445834   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:25:10.452788   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:25:10.452814   77929 api_server.go:131] duration metric: took 5.012704724s to wait for apiserver health ...
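The readiness loop above simply polls the apiserver's /healthz endpoint until it returns 200. The early 403s line up with the window before the rbac/bootstrap-roles post-start hook has created the default bindings that allow unauthenticated callers to read /healthz, and the 500 responses list the individual checks that are still failing. The same probe run by hand, assuming the node IP and port from the log (-k because the apiserver's serving certificate is not in the local trust store; ?verbose asks for the per-check breakdown):

    curl -k "https://192.168.61.206:8444/healthz?verbose"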
	I0422 18:25:10.452823   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:10.452828   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:10.454695   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:25:10.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:13.361234   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:10.456234   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:25:10.469460   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:25:10.510297   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:25:10.527988   77929 system_pods.go:59] 8 kube-system pods found
	I0422 18:25:10.528034   77929 system_pods.go:61] "coredns-7db6d8ff4d-w968m" [1372c3d4-cb23-4f33-911b-57876688fcd4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:25:10.528044   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [af6c3f45-494d-469b-95e0-3d0842d07a70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:25:10.528051   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [665925b4-3073-41c2-86c0-12186f079459] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:25:10.528057   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [e8661b67-89c5-43a6-b66e-828f637942e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:25:10.528061   77929 system_pods.go:61] "kube-proxy-4xvx2" [0e662ebe-1f6f-48fe-86c7-595b0bfa4bb6] Running
	I0422 18:25:10.528066   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [e6101593-2ee5-4765-b129-33b3ed7d4c98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:25:10.528075   77929 system_pods.go:61] "metrics-server-569cc877fc-l5qqw" [85eab808-f1f0-4fbc-9c54-1ae307226243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:25:10.528079   77929 system_pods.go:61] "storage-provisioner" [ba8465de-babc-4496-809f-68f6ec917ce8] Running
	I0422 18:25:10.528095   77929 system_pods.go:74] duration metric: took 17.768241ms to wait for pod list to return data ...
	I0422 18:25:10.528104   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:25:10.539169   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:25:10.539202   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:25:10.539214   77929 node_conditions.go:105] duration metric: took 11.105847ms to run NodePressure ...
	I0422 18:25:10.539237   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:10.808687   77929 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:25:10.815993   77929 kubeadm.go:733] kubelet initialised
	I0422 18:25:10.816025   77929 kubeadm.go:734] duration metric: took 7.302574ms waiting for restarted kubelet to initialise ...
	I0422 18:25:10.816037   77929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:25:10.824257   77929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:12.837255   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"False"
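The pod_ready polling above goes through the Go client, but an equivalent check from a workstation would be a kubectl wait against the same namespace and label, assuming the kubeconfig context carries the profile name as minikube normally sets it:

    kubectl --context default-k8s-diff-port-856422 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m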
	I0422 18:25:11.221887   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:11.222319   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:11.222358   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:11.222277   79197 retry.go:31] will retry after 4.068460973s: waiting for machine to come up
	I0422 18:25:16.704684   77400 start.go:364] duration metric: took 53.475319353s to acquireMachinesLock for "no-preload-407991"
	I0422 18:25:16.704741   77400 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:25:16.704752   77400 fix.go:54] fixHost starting: 
	I0422 18:25:16.705132   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:25:16.705166   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:25:16.721711   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0422 18:25:16.722127   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:25:16.722671   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:25:16.722693   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:25:16.723022   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:25:16.723220   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:16.723426   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:25:16.725197   77400 fix.go:112] recreateIfNeeded on no-preload-407991: state=Stopped err=<nil>
	I0422 18:25:16.725231   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	W0422 18:25:16.725430   77400 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:25:16.727275   77400 out.go:177] * Restarting existing kvm2 VM for "no-preload-407991" ...
	I0422 18:25:15.295463   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296039   78377 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:25:15.296072   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296081   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:25:15.296472   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.296493   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
	I0422 18:25:15.296508   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | skip adding static IP to network mk-old-k8s-version-367072 - found existing host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"}
	I0422 18:25:15.296524   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:25:15.296537   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:25:15.299164   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299527   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.299562   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299661   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:25:15.299692   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:25:15.299731   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:15.299745   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:25:15.299762   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:25:15.431323   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
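The WaitForSSH step above is nothing more than an external ssh invocation of exit 0 with the options dumped a few lines earlier; reproduced as a one-off command with the key path, user and IP shown in the log:

    ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
        -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
        -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa \
        -p 22 docker@192.168.72.149 'exit 0'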
	I0422 18:25:15.431669   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:25:15.432328   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.434829   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435261   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.435293   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435554   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:25:15.435765   78377 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:15.435786   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:15.436017   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.438390   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438750   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.438784   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438910   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.439095   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439314   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.439666   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.439849   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.439861   78377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:15.555657   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:15.555686   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.555931   78377 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:25:15.555962   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.556169   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.558789   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559254   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.559292   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559331   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.559492   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559641   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559748   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.559877   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.560055   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.560077   78377 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:25:15.690454   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:25:15.690486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.693309   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693654   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.693690   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693952   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.694172   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694390   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694546   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.694732   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.694940   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.694960   78377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:15.821039   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
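
The shell snippet above is the idempotent hostname mapping pushed over SSH: it leaves /etc/hosts untouched if some line already ends in the machine name, otherwise it rewrites an existing 127.0.1.1 entry, and only appends a new one as a last resort. A minimal Go sketch of the same decision logic (a standalone illustration, not minikube's provisioner code):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell above: do nothing if the name is already
// mapped, otherwise rewrite the 127.0.1.1 line, or append one if none exists.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostname(hosts, "old-k8s-version-367072"))
}
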
	I0422 18:25:15.821068   78377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:15.821096   78377 buildroot.go:174] setting up certificates
	I0422 18:25:15.821105   78377 provision.go:84] configureAuth start
	I0422 18:25:15.821113   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.821339   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.824209   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824673   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.824710   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824884   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.827439   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827725   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.827752   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827907   78377 provision.go:143] copyHostCerts
	I0422 18:25:15.827974   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:15.827987   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:15.828059   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:15.828170   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:15.828181   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:15.828209   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:15.828281   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:15.828291   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:15.828317   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:15.828411   78377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
	I0422 18:25:15.967003   78377 provision.go:177] copyRemoteCerts
	I0422 18:25:15.967056   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:15.967082   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.969759   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970152   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.970189   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970419   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.970600   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.970750   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.970903   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.058600   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:16.088368   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:25:16.119116   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:16.145380   78377 provision.go:87] duration metric: took 324.262342ms to configureAuth
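
configureAuth above generates a server certificate whose SANs cover the loopback address, the VM's DHCP address, and the machine's names, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A self-contained Go sketch that builds a certificate with the same SAN list (self-signed here for brevity, whereas minikube signs server.pem with the CA key; the key usages are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SAN list taken from the provision.go line above.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.149")}
	dns := []string{"localhost", "minikube", "old-k8s-version-367072"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-367072"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		IPAddresses:  ips,
		DNSNames:     dns,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed for brevity; minikube signs server.pem with certs/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
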
	I0422 18:25:16.145416   78377 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:16.145651   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:25:16.145736   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.148776   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149221   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.149251   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149449   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.149624   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149789   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.150116   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.150295   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.150313   78377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:16.448112   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:16.448141   78377 machine.go:97] duration metric: took 1.012360153s to provisionDockerMachine
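
The %!s(MISSING) in the command above appears to be a Go formatting artifact (a %s verb logged without its argument); the output echoed back shows what actually lands in /etc/sysconfig/crio.minikube. A trivial sketch reproducing that file content:

package main

import "fmt"

func main() {
	// Reconstructed content of /etc/sysconfig/crio.minikube, matching the
	// output echoed back by the SSH command above.
	opts := "--insecure-registry 10.96.0.0/12 " // service CIDR of the cluster
	fmt.Printf("\nCRIO_MINIKUBE_OPTIONS='%s'\n", opts)
}
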
	I0422 18:25:16.448154   78377 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:25:16.448166   78377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:16.448188   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.448508   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:16.448541   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.451479   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.451874   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.451898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.452170   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.452373   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.452576   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.452773   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.543300   78377 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:16.549385   78377 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:16.549409   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:16.549473   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:16.549590   78377 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:16.549727   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:16.560863   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:16.585861   78377 start.go:296] duration metric: took 137.693932ms for postStartSetup
	I0422 18:25:16.585911   78377 fix.go:56] duration metric: took 20.77354305s for fixHost
	I0422 18:25:16.585931   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.588815   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589234   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.589263   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589495   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.589713   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.589877   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.590039   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.590245   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.590396   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.590406   78377 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:16.704537   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810316.682617297
	
	I0422 18:25:16.704559   78377 fix.go:216] guest clock: 1713810316.682617297
	I0422 18:25:16.704569   78377 fix.go:229] Guest: 2024-04-22 18:25:16.682617297 +0000 UTC Remote: 2024-04-22 18:25:16.585915688 +0000 UTC m=+211.981005523 (delta=96.701609ms)
	I0422 18:25:16.704592   78377 fix.go:200] guest clock delta is within tolerance: 96.701609ms
	I0422 18:25:16.704600   78377 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 20.892277591s
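
fix.go reads the guest clock over SSH (a `date +%s.%N` style command, mangled to %!s(MISSING).%!N(MISSING) in the logged string) and compares it to the host-side timestamp, resyncing only when the delta exceeds a tolerance; here the ~96.7ms delta passes. A small sketch of that comparison (the one-second tolerance is an assumed value for illustration, not taken from the log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the log lines above: guest clock read over SSH vs. the
	// host-side timestamp recorded just before the command ran.
	guest := time.Unix(1713810316, 682617297) // 2024-04-22 18:25:16.682617297 UTC
	remote := time.Date(2024, 4, 22, 18, 25, 16, 585915688, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}

	// Hypothetical tolerance for illustration; minikube's real threshold may differ.
	const tolerance = time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
	}
}
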
	I0422 18:25:16.704631   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.704920   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:16.707837   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708205   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.708230   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708427   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.708994   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709163   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709240   78377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:16.709279   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.709342   78377 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:16.709364   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.712025   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712216   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712450   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712498   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712566   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.712674   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712720   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712722   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.712857   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.712945   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.713038   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.713101   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.713240   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.713370   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.804499   78377 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:16.836596   78377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:16.993049   78377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:17.000275   78377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:17.000346   78377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:17.023327   78377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:17.023351   78377 start.go:494] detecting cgroup driver to use...
	I0422 18:25:17.023425   78377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:17.045320   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:17.061622   78377 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:17.061692   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:17.078768   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:17.094562   78377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:17.221702   78377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:17.390374   78377 docker.go:233] disabling docker service ...
	I0422 18:25:17.390449   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:17.409352   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:17.425491   78377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:17.582359   78377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:17.735691   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:17.752812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:17.777437   78377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:25:17.777495   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.789378   78377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:17.789441   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.801159   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.813702   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.825938   78377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
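
The sed edits above pin the pause image to registry.k8s.io/pause:3.2, force cgroup_manager to cgroupfs, and re-insert conmon_cgroup = "pod" after it in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of those in-place line edits, applied to sample file content (the starting values are invented for the demo):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample starting content; the real file is /etc/crio/crio.conf.d/02-crio.conf.
	conf := `pause_image = "k8s.gcr.io/pause:3.1"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)

	// sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// sed -i '/conmon_cgroup = .*/d' then '/cgroup_manager = .*/a conmon_cgroup = "pod"'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}
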
	I0422 18:25:17.841552   78377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:17.852365   78377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:17.852455   78377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:17.870233   78377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:17.882139   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:18.021505   78377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:18.179583   78377 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:18.179677   78377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:18.185047   78377 start.go:562] Will wait 60s for crictl version
	I0422 18:25:18.185105   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:18.189079   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:18.227533   78377 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:18.227643   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.260147   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.297011   78377 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:25:15.362667   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:17.861622   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:14.831683   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:14.831706   77929 pod_ready.go:81] duration metric: took 4.007420508s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:14.831715   77929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343025   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:16.343056   77929 pod_ready.go:81] duration metric: took 1.511333532s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343070   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351244   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:17.351267   77929 pod_ready.go:81] duration metric: took 1.008189798s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351280   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:19.365025   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:18.298407   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:18.301613   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302026   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:18.302057   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302317   78377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:18.307249   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:18.321575   78377 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:18.321721   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:25:18.321767   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:18.382066   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:18.382133   78377 ssh_runner.go:195] Run: which lz4
	I0422 18:25:18.387080   78377 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:25:18.392576   78377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:25:18.392613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:25:16.728745   77400 main.go:141] libmachine: (no-preload-407991) Calling .Start
	I0422 18:25:16.728946   77400 main.go:141] libmachine: (no-preload-407991) Ensuring networks are active...
	I0422 18:25:16.729604   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network default is active
	I0422 18:25:16.729979   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network mk-no-preload-407991 is active
	I0422 18:25:16.730458   77400 main.go:141] libmachine: (no-preload-407991) Getting domain xml...
	I0422 18:25:16.731314   77400 main.go:141] libmachine: (no-preload-407991) Creating domain...
	I0422 18:25:18.079763   77400 main.go:141] libmachine: (no-preload-407991) Waiting to get IP...
	I0422 18:25:18.080862   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.081371   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.081401   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.081340   79353 retry.go:31] will retry after 226.494122ms: waiting for machine to come up
	I0422 18:25:18.309499   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.309914   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.310019   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.309900   79353 retry.go:31] will retry after 375.374338ms: waiting for machine to come up
	I0422 18:25:18.686507   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.687064   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.687093   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.687018   79353 retry.go:31] will retry after 341.714326ms: waiting for machine to come up
	I0422 18:25:19.030772   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.031261   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.031290   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.031229   79353 retry.go:31] will retry after 388.101939ms: waiting for machine to come up
	I0422 18:25:19.420994   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.421478   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.421500   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.421397   79353 retry.go:31] will retry after 732.485222ms: waiting for machine to come up
	I0422 18:25:20.155887   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:20.156717   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:20.156750   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:20.156665   79353 retry.go:31] will retry after 950.207106ms: waiting for machine to come up
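
Interleaved with the old-k8s-version provisioning, process 77400 is restarting the no-preload-407991 VM and polling libvirt for a DHCP lease, retrying with a randomized, slowly growing backoff until the domain reports an IP. A generic sketch of that retry shape (the poll function, timings and returned address are hypothetical; only the structure mirrors the retry.go lines):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a randomized,
// slowly growing interval between attempts, like the retry.go lines above.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d: will retry after %v: waiting for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff += 150 * time.Millisecond
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.61.100", nil // stand-in address for the demo
	}, 30*time.Second)
	fmt.Println(ip, err)
}
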
	I0422 18:25:19.878966   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.364111   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:21.859384   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.362519   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.362552   77929 pod_ready.go:81] duration metric: took 5.011264858s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.362566   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371087   77929 pod_ready.go:92] pod "kube-proxy-4xvx2" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.371112   77929 pod_ready.go:81] duration metric: took 8.534129ms for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371142   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376156   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.376183   77929 pod_ready.go:81] duration metric: took 5.03143ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376196   77929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:24.385435   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:20.319994   78377 crio.go:462] duration metric: took 1.932984536s to copy over tarball
	I0422 18:25:20.320076   78377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:25:23.622384   78377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.30227916s)
	I0422 18:25:23.622411   78377 crio.go:469] duration metric: took 3.302385661s to extract the tarball
	I0422 18:25:23.622419   78377 ssh_runner.go:146] rm: /preloaded.tar.lz4
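
Because the stat probe showed /preloaded.tar.lz4 missing on the guest, the ~473 MB preload tarball was copied over SSH and unpacked into /var with tar's lz4 filter while preserving security.capability xattrs, then deleted. A hedged sketch of invoking that same extraction command (paths are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same flags as the logged command: keep security.capability xattrs and
	// decompress through lz4 while extracting into /var.
	tarball := "/preloaded.tar.lz4" // placeholder path
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)

	start := time.Now()
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted %s in %v\n", tarball, time.Since(start))
}
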
	I0422 18:25:23.678794   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:23.720105   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:23.720138   78377 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:23.720191   78377 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.720221   78377 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.720264   78377 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.720285   78377 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:25:23.720310   78377 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.720396   78377 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.720464   78377 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.720244   78377 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721865   78377 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.721895   78377 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.721911   78377 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721925   78377 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.721986   78377 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.722013   78377 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.722040   78377 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.722415   78377 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:25:23.947080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:25:23.956532   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.969401   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.975080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.977902   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.987657   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.091349   78377 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:25:24.091415   78377 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:25:24.091473   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091508   78377 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:25:24.091564   78377 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.091612   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091773   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.112708   78377 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:25:24.112758   78377 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.112807   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.156371   78377 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:25:24.156420   78377 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.156476   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209420   78377 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:25:24.209468   78377 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.209467   78377 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:25:24.209504   78377 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.209519   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209533   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209580   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:25:24.209613   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.209666   78377 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:25:24.209697   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.209700   78377 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.209721   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.209750   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.319159   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:25:24.319265   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:25:24.319294   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:25:24.319374   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:25:24.319453   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.319532   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.319575   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.406665   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:25:24.406699   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:25:24.406776   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:25:24.581672   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:21.108444   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:21.109056   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:21.109082   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:21.109004   79353 retry.go:31] will retry after 958.250136ms: waiting for machine to come up
	I0422 18:25:22.069541   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:22.070120   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:22.070144   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:22.070036   79353 retry.go:31] will retry after 989.607679ms: waiting for machine to come up
	I0422 18:25:23.061351   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:23.061877   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:23.061908   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:23.061823   79353 retry.go:31] will retry after 1.451989455s: waiting for machine to come up
	I0422 18:25:24.515233   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:24.515730   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:24.515755   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:24.515686   79353 retry.go:31] will retry after 2.303903602s: waiting for machine to come up
	I0422 18:25:24.365508   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.861066   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.389132   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:28.883625   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:24.724445   78377 cache_images.go:92] duration metric: took 1.004285991s to LoadCachedImages
	W0422 18:25:24.894312   78377 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0422 18:25:24.894361   78377 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:25:24.894488   78377 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
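
kubeadm.go renders the kubelet systemd drop-in above from the node settings: the CRI-O socket as the remote runtime endpoint, the node name as hostname override, and the node IP. A compact text/template sketch producing an ExecStart line of the same shape (the template text is illustrative, not the exact file minikube ships):

package main

import (
	"os"
	"text/template"
)

// Illustrative template; the flags mirror the ExecStart line in the log above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.20.0/kubelet",
		"NodeName":    "old-k8s-version-367072",
		"NodeIP":      "192.168.72.149",
	})
}
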
	I0422 18:25:24.894582   78377 ssh_runner.go:195] Run: crio config
	I0422 18:25:24.951231   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:25:24.951266   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:24.951282   78377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:24.951305   78377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:25:24.951495   78377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:24.951570   78377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:25:24.964466   78377 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:24.964547   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:24.976092   78377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:25:24.995716   78377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:25.014159   78377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
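
The block above is the kubeadm/kubelet/kube-proxy configuration minikube renders for this profile and copies to /var/tmp/minikube/kubeadm.yaml.new on the node. As a rough illustration of how such a config can be rendered from per-profile values, here is a minimal Go sketch using text/template; the struct, field names, and trimmed-down template are illustrative only, not minikube's actual bootstrapper templates.

package main

import (
	"os"
	"text/template"
)

// Illustrative values only; minikube derives these from the cluster profile.
type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		AdvertiseAddress: "192.168.72.149",
		BindPort:         8443,
		NodeName:         "old-k8s-version-367072",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}
	// Render to stdout; minikube instead scp's the rendered file onto the VM.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
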
	I0422 18:25:25.036255   78377 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:25.040649   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:25.055323   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:25.186492   78377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:25.208819   78377 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:25:25.208862   78377 certs.go:194] generating shared ca certs ...
	I0422 18:25:25.208882   78377 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.209089   78377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:25.209144   78377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:25.209155   78377 certs.go:256] generating profile certs ...
	I0422 18:25:25.209307   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:25:25.209376   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:25:25.209438   78377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:25:25.209584   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:25.209623   78377 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:25.209632   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:25.209664   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:25.209701   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:25.209738   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:25.209791   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:25.210613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:25.262071   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:25.298556   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:25.331614   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:25.368285   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:25:25.403290   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:25.441081   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:25.487498   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:25:25.522482   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:25.549945   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:25.578991   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:25.608935   78377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:25.629179   78377 ssh_runner.go:195] Run: openssl version
	I0422 18:25:25.636149   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:25.648693   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653465   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653534   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.659701   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:25.671984   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:25.684361   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689344   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689410   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.695648   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:25.708266   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:25.721991   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726808   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726872   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.732974   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:25.749380   78377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:25.754517   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:25.761538   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:25.768472   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:25.775728   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:25.782337   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:25.788885   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
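
Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now (exit status 0 if so), which is how minikube decides whether control-plane certs need regenerating. A minimal Go sketch of the same check, assuming a PEM-encoded certificate file; the path below is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path is illustrative; on the node the certs live under /var/lib/minikube/certs.
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does the cert outlive the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it would be regenerated")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}
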
	I0422 18:25:25.795677   78377 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:25.795771   78377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:25.795839   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.837381   78377 cri.go:89] found id: ""
	I0422 18:25:25.837437   78377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:25.848554   78377 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:25.848574   78377 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:25.848579   78377 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:25.848625   78377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:25.860204   78377 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:25.861212   78377 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:25:25.861884   78377 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-367072" cluster setting kubeconfig missing "old-k8s-version-367072" context setting]
	I0422 18:25:25.862851   78377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.864562   78377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:25.875151   78377 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.149
	I0422 18:25:25.875182   78377 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:25.875193   78377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:25.875255   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.915872   78377 cri.go:89] found id: ""
	I0422 18:25:25.915982   78377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:25.934776   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:25.946299   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:25.946326   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:25.946378   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:25:25.957495   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:25.957578   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:25.968843   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:25:25.981829   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:25.981909   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:25.995318   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.009567   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:26.009630   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.024306   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:25:26.036008   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:26.036075   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:26.046594   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:26.057056   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:26.207676   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.085460   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.324735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.431848   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.541157   78377 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:27.541254   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.042131   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.542270   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.041887   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.542069   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:26.821539   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:26.822006   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:26.822033   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:26.821950   79353 retry.go:31] will retry after 1.870697225s: waiting for machine to come up
	I0422 18:25:28.695072   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:28.695420   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:28.695466   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:28.695386   79353 retry.go:31] will retry after 2.327485176s: waiting for machine to come up
	I0422 18:25:28.861976   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:31.361339   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.883801   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:33.389422   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.041985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.541653   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.041304   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.542040   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.042024   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.541622   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.041428   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.541675   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.041841   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.541705   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.024382   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:31.024817   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:31.024845   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:31.024786   79353 retry.go:31] will retry after 2.767538103s: waiting for machine to come up
	I0422 18:25:33.794390   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:33.794834   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:33.794872   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:33.794808   79353 retry.go:31] will retry after 5.661373675s: waiting for machine to come up
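
The `retry.go:31` lines above show libmachine polling libvirt for the VM's DHCP lease, sleeping a little longer on each failed attempt. Below is a rough Go sketch of that retry-with-growing-delay pattern; it captures only the shape of the behaviour seen in the log, not minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() until it returns an address or attempts run out,
// sleeping a little longer (with jitter) between tries, like the log above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay on each failure
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.164", nil
	}, 10)
	fmt.Println(ip, err)
}
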
	I0422 18:25:33.860276   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.861770   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:38.361316   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.883098   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:37.883749   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.041898   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.541499   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.041443   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.542150   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.042296   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.542002   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.041367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.541518   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.041471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.542025   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
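
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs are minikube polling roughly every 500ms for the kube-apiserver process to appear after the `kubeadm init phase` steps. A small Go sketch of that poll-until-deadline loop follows; the function name and timeout are assumptions for illustration, not the real apiserver wait code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until the kube-apiserver process shows up
// or the deadline passes, mirroring the repeated ssh_runner calls in the log.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// On the node this runs over SSH; locally, exec.Command is close enough for a sketch.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for apiserver process to appear")
}

func main() {
	// Short timeout for the demo; the real wait allows much longer.
	if err := waitForAPIServerProcess(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}
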
	I0422 18:25:39.457864   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458407   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has current primary IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458447   77400 main.go:141] libmachine: (no-preload-407991) Found IP for machine: 192.168.39.164
	I0422 18:25:39.458492   77400 main.go:141] libmachine: (no-preload-407991) Reserving static IP address...
	I0422 18:25:39.458954   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.458980   77400 main.go:141] libmachine: (no-preload-407991) DBG | skip adding static IP to network mk-no-preload-407991 - found existing host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"}
	I0422 18:25:39.458992   77400 main.go:141] libmachine: (no-preload-407991) Reserved static IP address: 192.168.39.164
	I0422 18:25:39.459012   77400 main.go:141] libmachine: (no-preload-407991) Waiting for SSH to be available...
	I0422 18:25:39.459027   77400 main.go:141] libmachine: (no-preload-407991) DBG | Getting to WaitForSSH function...
	I0422 18:25:39.461404   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461715   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.461746   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461875   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH client type: external
	I0422 18:25:39.461906   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa (-rw-------)
	I0422 18:25:39.461956   77400 main.go:141] libmachine: (no-preload-407991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:39.461974   77400 main.go:141] libmachine: (no-preload-407991) DBG | About to run SSH command:
	I0422 18:25:39.461992   77400 main.go:141] libmachine: (no-preload-407991) DBG | exit 0
	I0422 18:25:39.591446   77400 main.go:141] libmachine: (no-preload-407991) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:39.591795   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetConfigRaw
	I0422 18:25:39.592473   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.594928   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595379   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.595414   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595632   77400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/config.json ...
	I0422 18:25:39.595890   77400 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:39.595914   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:39.596103   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.598532   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.598899   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.598929   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.599071   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.599270   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599450   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599592   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.599728   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.599927   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.599942   77400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:39.712043   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:39.712081   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712336   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:25:39.712363   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712548   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.715474   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.715936   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.715960   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.716089   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.716265   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716396   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716530   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.716656   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.716860   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.716874   77400 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407991 && echo "no-preload-407991" | sudo tee /etc/hostname
	I0422 18:25:39.845921   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407991
	
	I0422 18:25:39.845959   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.848790   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849093   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.849121   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849288   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.849495   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849638   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849817   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.850014   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.850183   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.850200   77400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407991/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:39.977389   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:39.977427   77400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:39.977447   77400 buildroot.go:174] setting up certificates
	I0422 18:25:39.977456   77400 provision.go:84] configureAuth start
	I0422 18:25:39.977468   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.977754   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.980800   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981266   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.981305   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981458   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.984031   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984478   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.984510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984654   77400 provision.go:143] copyHostCerts
	I0422 18:25:39.984713   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:39.984725   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:39.984788   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:39.984907   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:39.984918   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:39.984952   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:39.985038   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:39.985048   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:39.985076   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:39.985158   77400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.no-preload-407991 san=[127.0.0.1 192.168.39.164 localhost minikube no-preload-407991]
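
The provision step above generates a server certificate signed by the shared minikube CA, with the host names and IPs from the log as subject alternative names. A compact Go sketch of issuing such a SAN certificate with crypto/x509; it creates an ephemeral CA instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Ephemeral CA key/cert; minikube instead reuses ca.pem / ca-key.pem from its cert store.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "no-preload-407991", Organization: []string{"jenkins.no-preload-407991"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-407991"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.164")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
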
	I0422 18:25:40.224235   77400 provision.go:177] copyRemoteCerts
	I0422 18:25:40.224306   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:40.224352   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.227355   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.227814   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.227842   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.228035   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.228232   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.228392   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.228560   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.318916   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:40.346168   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:40.371490   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:25:40.396866   77400 provision.go:87] duration metric: took 419.381117ms to configureAuth
	I0422 18:25:40.396899   77400 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:40.397067   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:25:40.397130   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.399642   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400060   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.400095   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400269   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.400466   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400652   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400832   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.401018   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.401176   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.401191   77400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:40.698107   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:40.698140   77400 machine.go:97] duration metric: took 1.102235221s to provisionDockerMachine
	I0422 18:25:40.698153   77400 start.go:293] postStartSetup for "no-preload-407991" (driver="kvm2")
	I0422 18:25:40.698171   77400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:40.698187   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.698497   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:40.698532   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.701545   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.701933   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.701964   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.702070   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.702295   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.702492   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.702727   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.800538   77400 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:40.805027   77400 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:40.805060   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:40.805133   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:40.805216   77400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:40.805304   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:40.816872   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:40.843857   77400 start.go:296] duration metric: took 145.69044ms for postStartSetup
	I0422 18:25:40.843896   77400 fix.go:56] duration metric: took 24.13914409s for fixHost
	I0422 18:25:40.843914   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.846770   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847148   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.847184   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847391   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.847605   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847778   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847966   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.848199   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.848382   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.848396   77400 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:25:40.964440   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810340.939149386
	
	I0422 18:25:40.964473   77400 fix.go:216] guest clock: 1713810340.939149386
	I0422 18:25:40.964483   77400 fix.go:229] Guest: 2024-04-22 18:25:40.939149386 +0000 UTC Remote: 2024-04-22 18:25:40.843899302 +0000 UTC m=+360.205454093 (delta=95.250084ms)
	I0422 18:25:40.964508   77400 fix.go:200] guest clock delta is within tolerance: 95.250084ms
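
The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it against the host's clock at the moment the command returned; small deltas are tolerated, larger ones would trigger a resync. A minimal Go sketch of parsing that output and computing the delta (the tolerance constant below is illustrative):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log above.
	guestOut := "1713810340.939149386"

	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	if len(parts) != 2 {
		panic("unexpected date output")
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		panic(err)
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(sec, nsec)

	// In the real check the "host" side is the time the SSH command returned.
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 1 * time.Second // illustrative; minikube applies its own threshold
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}
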
	I0422 18:25:40.964513   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 24.259798286s
	I0422 18:25:40.964535   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.964813   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:40.967510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.967906   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.967932   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.968087   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968610   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968782   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968866   77400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:40.968910   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.969047   77400 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:40.969074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.971818   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972039   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972190   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972203   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972394   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972565   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972580   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972594   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972733   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972791   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.972875   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972948   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.973062   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.973206   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:41.092004   77400 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:41.098574   77400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:41.242800   77400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:41.250454   77400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:41.250521   77400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:41.267380   77400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:41.267408   77400 start.go:494] detecting cgroup driver to use...
	I0422 18:25:41.267478   77400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:41.284742   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:41.299527   77400 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:41.299596   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:41.314189   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:41.329444   77400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:41.456719   77400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:41.628305   77400 docker.go:233] disabling docker service ...
	I0422 18:25:41.628376   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:41.643226   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:41.657578   77400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:41.780449   77400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:41.898823   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
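	[editor's note] The sequence above stops and masks cri-docker and docker so that CRI-O is the only runtime left for the kubelet. A hypothetical manual spot-check of that state (not something the test itself runs) would be:
	    sudo systemctl is-active docker.service cri-docker.service    # expect "inactive"/"failed" for both
	    sudo systemctl is-enabled docker.socket cri-docker.socket     # socket units were disabled, service units masked, per the log above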
	I0422 18:25:41.913578   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:41.933621   77400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:25:41.933679   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.944309   77400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:41.944382   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.955308   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.966445   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.977509   77400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:41.989479   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.001915   77400 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.020554   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
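	[editor's note] Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below. This is a sketch reconstructed from the commands shown, not a verbatim dump of the file:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",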
	I0422 18:25:42.033225   77400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:42.044177   77400 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:42.044231   77400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:42.060403   77400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:42.071760   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:42.213747   77400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:42.361818   77400 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:42.361911   77400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:42.367211   77400 start.go:562] Will wait 60s for crictl version
	I0422 18:25:42.367265   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.371042   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:42.408686   77400 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:42.408773   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.438447   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.469117   77400 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:25:40.862849   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.361826   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:39.884361   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:41.885199   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.885865   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:40.041777   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.541411   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.041834   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.542328   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.042211   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.542008   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.041844   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.542121   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.041564   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.541344   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.470665   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:42.473467   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.473845   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:42.473871   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.474121   77400 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:42.478401   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:42.491034   77400 kubeadm.go:877] updating cluster {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:42.491163   77400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:25:42.491203   77400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:42.530418   77400 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:25:42.530443   77400 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.530585   77400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.530641   77400 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0422 18:25:42.530601   77400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.530609   77400 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.530622   77400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.530626   77400 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532108   77400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.532136   77400 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0422 18:25:42.532111   77400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.532113   77400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.532175   77400 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532197   77400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.532223   77400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.532506   77400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.735366   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.750777   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0422 18:25:42.758260   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.759633   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.763447   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.765416   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.803799   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.832904   77400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0422 18:25:42.832959   77400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.833021   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981471   77400 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0422 18:25:42.981528   77400 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.981553   77400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0422 18:25:42.981584   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981592   77400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.981635   77400 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0422 18:25:42.981663   77400 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.981687   77400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0422 18:25:42.981699   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981642   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981716   77400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.981770   77400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0422 18:25:42.981776   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981788   77400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.981820   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981846   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:43.021364   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0422 18:25:43.021416   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:43.021455   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.021460   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:43.021529   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:43.021534   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:43.021585   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:43.130300   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0422 18:25:43.130373   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0422 18:25:43.130408   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:43.130425   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0422 18:25:43.130455   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:43.130514   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:43.134769   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0422 18:25:43.134785   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0422 18:25:43.134797   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134839   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134853   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:43.134882   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0422 18:25:43.134959   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:43.142273   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0422 18:25:43.142486   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0422 18:25:43.142837   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0422 18:25:43.840108   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210614   77400 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.075740127s)
	I0422 18:25:45.210650   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0422 18:25:45.210655   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.075789371s)
	I0422 18:25:45.210676   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0422 18:25:45.210693   77400 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.075715404s)
	I0422 18:25:45.210699   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210706   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0422 18:25:45.210748   77400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.370610047s)
	I0422 18:25:45.210785   77400 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0422 18:25:45.210750   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210842   77400 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210969   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:45.363082   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:47.861802   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:46.383938   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:48.385209   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:45.042273   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.541576   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.041447   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.541920   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.042364   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.541813   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.042362   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.541320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.041845   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.542204   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.203063   77400 ssh_runner.go:235] Completed: which crictl: (2.992066474s)
	I0422 18:25:48.203106   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.992228832s)
	I0422 18:25:48.203143   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0422 18:25:48.203159   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:48.203171   77400 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:48.203210   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:49.863963   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:52.370507   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.883608   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:53.386229   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.042263   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.541538   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.042055   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.041479   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.542313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.041554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.541500   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.042153   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.541953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.419429   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.216195193s)
	I0422 18:25:52.419462   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0422 18:25:52.419474   77400 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.216288559s)
	I0422 18:25:52.419488   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419513   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0422 18:25:52.419537   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419581   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:52.424638   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0422 18:25:53.873720   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.454157304s)
	I0422 18:25:53.873750   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0422 18:25:53.873780   77400 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:53.873825   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:54.860810   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:56.864272   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.388103   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:57.887970   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.041393   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.541470   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.042188   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.541734   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.042041   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.541540   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.041682   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.542178   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.042125   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.542154   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.955181   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.081308071s)
	I0422 18:25:55.955210   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0422 18:25:55.955236   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:55.955300   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:58.218734   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.263410883s)
	I0422 18:25:58.218762   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0422 18:25:58.218792   77400 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:58.218843   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:59.071398   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0422 18:25:59.071443   77400 cache_images.go:123] Successfully loaded all cached images
	I0422 18:25:59.071450   77400 cache_images.go:92] duration metric: took 16.54097573s to LoadCachedImages
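	[editor's note] Each load step above streams a cached image tarball into the containers/storage store that CRI-O shares with podman. A hand-run equivalent for a single image (paths exactly as they appear in the log; the grep is only illustrative) would look like:
	    sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	    sudo crictl images | grep kube-proxy    # should now list registry.k8s.io/kube-proxy:v1.30.0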
	I0422 18:25:59.071463   77400 kubeadm.go:928] updating node { 192.168.39.164 8443 v1.30.0 crio true true} ...
	I0422 18:25:59.071610   77400 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-407991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:59.071698   77400 ssh_runner.go:195] Run: crio config
	I0422 18:25:59.125757   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:25:59.125783   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:59.125800   77400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:59.125832   77400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-407991 NodeName:no-preload-407991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:59.126001   77400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-407991"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:59.126073   77400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:59.137254   77400 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:59.137320   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:59.146983   77400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0422 18:25:59.165207   77400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:59.182898   77400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
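	[editor's note] At this point the kubelet unit and the drop-in shown earlier have been copied to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hypothetical check that systemd will pick them up (not part of the test run) is:
	    sudo systemctl cat kubelet.service | grep -E '10-kubeadm.conf|^ExecStart='
	    # the final ExecStart line should match the one printed above, including --node-ip=192.168.39.164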
	I0422 18:25:59.201735   77400 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:59.206108   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:59.219642   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:59.336565   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:59.356844   77400 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991 for IP: 192.168.39.164
	I0422 18:25:59.356873   77400 certs.go:194] generating shared ca certs ...
	I0422 18:25:59.356893   77400 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:59.357058   77400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:59.357121   77400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:59.357133   77400 certs.go:256] generating profile certs ...
	I0422 18:25:59.357209   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/client.key
	I0422 18:25:59.357329   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key.6aa1268b
	I0422 18:25:59.357413   77400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key
	I0422 18:25:59.357574   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:59.357616   77400 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:59.357631   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:59.357672   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:59.357707   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:59.357745   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:59.357823   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:59.358765   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:59.395982   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:59.430445   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:59.465415   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:59.502678   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 18:25:59.538225   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:25:59.570635   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:59.596096   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:59.622051   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:59.647372   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:59.673650   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:59.699515   77400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:59.717253   77400 ssh_runner.go:195] Run: openssl version
	I0422 18:25:59.723704   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:59.735265   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740264   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740319   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.746445   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:59.757879   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:59.769243   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774505   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774562   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.780572   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:59.793472   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:59.805187   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810148   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810191   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.816350   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
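	[editor's note] The openssl/ln pairs above follow the standard OpenSSL hashed-name convention: each CA in /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs. A hypothetical re-check for the minikube CA (hash b5213941 per the log) would be:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	    ls -l "/etc/ssl/certs/${h}.0"                                                  # should point at minikubeCA.pem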
	I0422 18:25:59.828208   77400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:59.832799   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:59.838952   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:59.845145   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:59.851309   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:59.857643   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:59.864892   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
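	[editor's note] The -checkend 86400 probes above ask openssl whether each certificate expires within the next 24 hours (exit status 1 if it does, 0 otherwise). To inspect the actual expiry of one of them by hand, a hypothetical check would be:
	    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
	    # notAfter must lie more than 24h in the future for "-checkend 86400" to succeed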
	I0422 18:25:59.873625   77400 kubeadm.go:391] StartCluster: {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:59.873749   77400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:59.873826   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.913578   77400 cri.go:89] found id: ""
	I0422 18:25:59.913656   77400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:59.925105   77400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:59.925131   77400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:59.925138   77400 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:59.925192   77400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:59.935942   77400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:59.937363   77400 kubeconfig.go:125] found "no-preload-407991" server: "https://192.168.39.164:8443"
	I0422 18:25:59.939672   77400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:59.949774   77400 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.164
	I0422 18:25:59.949810   77400 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:59.949841   77400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:59.949896   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.989385   77400 cri.go:89] found id: ""
	I0422 18:25:59.989443   77400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:26:00.005985   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:26:00.016873   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:26:00.016897   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:26:00.016953   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:26:00.027119   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:26:00.027205   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:26:00.038360   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:26:00.048176   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:26:00.048246   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:26:00.058861   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.068955   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:26:00.069018   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.079147   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:26:00.089400   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:26:00.089477   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:26:00.100245   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:26:00.111040   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:00.224436   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:59.362215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:01.860196   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.388433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:02.883211   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.042114   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.542138   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.042285   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.542226   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.041310   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.541432   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.041406   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.542306   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.042010   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.541508   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.838456   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.057201   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.143346   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.294896   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:26:01.295031   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.795945   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.296085   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.324434   77400 api_server.go:72] duration metric: took 1.029539423s to wait for apiserver process to appear ...
	I0422 18:26:02.324467   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:26:02.324490   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.784948   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:26:04.784984   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:26:04.784997   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.844019   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.844064   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:04.844084   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.848805   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.848838   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.325458   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.332351   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.332410   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.824785   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.830293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.830318   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:06.325380   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:06.332804   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:26:06.344083   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:26:06.344110   77400 api_server.go:131] duration metric: took 4.019636154s to wait for apiserver health ...
	I0422 18:26:06.344118   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:26:06.344123   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:26:06.345875   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:26:03.863020   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:06.360428   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:04.884648   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:07.382356   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:09.388391   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:05.041961   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.541723   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.041954   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.541963   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.041378   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.541879   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.041942   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.541357   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.041425   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.541474   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.347812   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:26:06.361087   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:26:06.385654   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:26:06.398331   77400 system_pods.go:59] 8 kube-system pods found
	I0422 18:26:06.398372   77400 system_pods.go:61] "coredns-7db6d8ff4d-2p2sr" [3f42ce46-e76d-4bc8-9dd5-463a08948e4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:26:06.398384   77400 system_pods.go:61] "etcd-no-preload-407991" [96ae7feb-802f-44a8-81fc-5ea5de12e73b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:26:06.398396   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [28010e33-49a1-4c6b-90f9-939ede3ed97e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:26:06.398404   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [1e7db029-2196-499f-bc88-d780d065f80c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:26:06.398415   77400 system_pods.go:61] "kube-proxy-767q4" [1c6d01b0-caf0-4d52-8da8-caad7b158012] Running
	I0422 18:26:06.398426   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [3ef8d145-d90e-455d-98fe-de9e6080a178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:26:06.398433   77400 system_pods.go:61] "metrics-server-569cc877fc-jmjhm" [d831b01b-af2e-4c7f-944c-e768d724ee5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:26:06.398439   77400 system_pods.go:61] "storage-provisioner" [db8196df-a394-4e10-9db7-c10414833af3] Running
	I0422 18:26:06.398447   77400 system_pods.go:74] duration metric: took 12.770066ms to wait for pod list to return data ...
	I0422 18:26:06.398455   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:26:06.402125   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:26:06.402158   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:26:06.402170   77400 node_conditions.go:105] duration metric: took 3.709194ms to run NodePressure ...
	I0422 18:26:06.402195   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:06.676133   77400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680247   77400 kubeadm.go:733] kubelet initialised
	I0422 18:26:06.680269   77400 kubeadm.go:734] duration metric: took 4.114413ms waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680276   77400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:26:06.687275   77400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.693967   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.693986   77400 pod_ready.go:81] duration metric: took 6.687466ms for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.694004   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.694012   77400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.698539   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698562   77400 pod_ready.go:81] duration metric: took 4.539271ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.698571   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698578   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.703382   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703407   77400 pod_ready.go:81] duration metric: took 4.822601ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.703418   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703425   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.789413   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789449   77400 pod_ready.go:81] duration metric: took 86.014056ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.789459   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789465   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189544   77400 pod_ready.go:92] pod "kube-proxy-767q4" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:07.189572   77400 pod_ready.go:81] duration metric: took 400.096716ms for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189585   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:09.201757   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:08.861714   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.359820   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.362303   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.883726   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:14.382966   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:10.041640   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.541360   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.042045   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.542018   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.541590   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.042320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.542036   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.041303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.541575   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.697196   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.697458   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.861378   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:17.861808   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:16.385523   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:18.883000   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.042300   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.542084   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.541867   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.041409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.542019   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.042027   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.042237   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.541613   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.697079   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:15.697104   77400 pod_ready.go:81] duration metric: took 8.507511233s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:15.697116   77400 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:17.704095   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.204276   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.360946   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:22.861202   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.883107   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:23.383119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.042039   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.541667   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.041765   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.542383   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.042213   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.541317   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.042164   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.541367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.042303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.541416   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.204697   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.703926   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.861797   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.361089   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.384161   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.386172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.042321   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.541554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.041583   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.542179   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.041877   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.541400   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:27.541473   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:27.585381   78377 cri.go:89] found id: ""
	I0422 18:26:27.585411   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.585424   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:27.585431   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:27.585503   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:27.622536   78377 cri.go:89] found id: ""
	I0422 18:26:27.622568   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.622578   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:27.622584   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:27.622645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:27.665233   78377 cri.go:89] found id: ""
	I0422 18:26:27.665264   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.665272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:27.665278   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:27.665356   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:27.703600   78377 cri.go:89] found id: ""
	I0422 18:26:27.703629   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.703640   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:27.703647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:27.703706   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:27.741412   78377 cri.go:89] found id: ""
	I0422 18:26:27.741441   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.741451   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:27.741459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:27.741520   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:27.783184   78377 cri.go:89] found id: ""
	I0422 18:26:27.783211   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.783218   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:27.783224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:27.783290   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:27.825404   78377 cri.go:89] found id: ""
	I0422 18:26:27.825433   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.825443   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:27.825450   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:27.825513   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:27.862052   78377 cri.go:89] found id: ""
	I0422 18:26:27.862076   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.862086   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:27.862096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:27.862109   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:27.914533   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:27.914564   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:27.929474   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:27.929502   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:28.054566   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:28.054595   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:28.054612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:28.119416   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:28.119451   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:27.204128   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.207057   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.364913   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.883085   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.883536   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.883927   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:30.667642   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:30.680870   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:30.680930   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:30.719832   78377 cri.go:89] found id: ""
	I0422 18:26:30.719863   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.719874   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:30.719881   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:30.719940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:30.756168   78377 cri.go:89] found id: ""
	I0422 18:26:30.756195   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.756206   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:30.756213   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:30.756267   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:30.792940   78377 cri.go:89] found id: ""
	I0422 18:26:30.792963   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.792971   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:30.792976   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:30.793021   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:30.827452   78377 cri.go:89] found id: ""
	I0422 18:26:30.827480   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.827490   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:30.827497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:30.827563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:30.868058   78377 cri.go:89] found id: ""
	I0422 18:26:30.868088   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.868099   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:30.868107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:30.868170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:30.908639   78377 cri.go:89] found id: ""
	I0422 18:26:30.908672   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.908680   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:30.908686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:30.908735   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:30.959048   78377 cri.go:89] found id: ""
	I0422 18:26:30.959073   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.959080   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:30.959085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:30.959153   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:30.998779   78377 cri.go:89] found id: ""
	I0422 18:26:30.998809   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.998821   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:30.998856   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:30.998875   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:31.053763   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:31.053804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:31.069522   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:31.069558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:31.147512   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:31.147541   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:31.147556   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:31.222713   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:31.222752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:33.765573   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:33.781038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:33.781116   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:33.822148   78377 cri.go:89] found id: ""
	I0422 18:26:33.822175   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.822182   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:33.822187   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:33.822282   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:33.862524   78377 cri.go:89] found id: ""
	I0422 18:26:33.862553   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.862559   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:33.862565   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:33.862626   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:33.905952   78377 cri.go:89] found id: ""
	I0422 18:26:33.905980   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.905991   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:33.905999   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:33.906059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:33.943184   78377 cri.go:89] found id: ""
	I0422 18:26:33.943212   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.943220   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:33.943227   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:33.943285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:33.981677   78377 cri.go:89] found id: ""
	I0422 18:26:33.981712   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.981723   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:33.981731   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:33.981790   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:34.025999   78377 cri.go:89] found id: ""
	I0422 18:26:34.026026   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.026035   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:34.026042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:34.026102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:34.062940   78377 cri.go:89] found id: ""
	I0422 18:26:34.062967   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.062977   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:34.062985   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:34.063044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:34.103112   78377 cri.go:89] found id: ""
	I0422 18:26:34.103153   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.103164   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:34.103175   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:34.103189   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:34.156907   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:34.156944   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:34.171581   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:34.171608   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:34.252755   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:34.252784   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:34.252799   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:34.334118   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:34.334155   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:31.704123   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:34.206443   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.863261   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.360525   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.361132   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.385507   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.882649   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.882905   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:36.897949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:36.898026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:36.934776   78377 cri.go:89] found id: ""
	I0422 18:26:36.934801   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.934808   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:36.934814   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:36.934870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:36.974432   78377 cri.go:89] found id: ""
	I0422 18:26:36.974459   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.974467   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:36.974472   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:36.974519   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:37.011460   78377 cri.go:89] found id: ""
	I0422 18:26:37.011485   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.011496   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:37.011503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:37.011583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:37.056559   78377 cri.go:89] found id: ""
	I0422 18:26:37.056592   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.056604   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:37.056611   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:37.056670   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:37.095328   78377 cri.go:89] found id: ""
	I0422 18:26:37.095359   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.095371   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:37.095379   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:37.095460   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:37.132056   78377 cri.go:89] found id: ""
	I0422 18:26:37.132084   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.132095   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:37.132101   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:37.132162   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:37.168957   78377 cri.go:89] found id: ""
	I0422 18:26:37.168987   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.168998   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:37.169005   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:37.169072   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:37.207501   78377 cri.go:89] found id: ""
	I0422 18:26:37.207533   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.207544   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:37.207553   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:37.207567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:37.289851   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:37.289890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:37.351454   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:37.351481   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:37.409901   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:37.409938   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:37.425203   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:37.425234   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:37.508518   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:36.704473   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:39.204839   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.863837   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.362000   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.887004   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.384351   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.008934   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:40.023037   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:40.023096   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:40.066750   78377 cri.go:89] found id: ""
	I0422 18:26:40.066791   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.066811   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:40.066818   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:40.066889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:40.106562   78377 cri.go:89] found id: ""
	I0422 18:26:40.106584   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.106592   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:40.106598   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:40.106644   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:40.145265   78377 cri.go:89] found id: ""
	I0422 18:26:40.145300   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.145311   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:40.145319   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:40.145385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:40.182667   78377 cri.go:89] found id: ""
	I0422 18:26:40.182696   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.182707   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:40.182714   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:40.182772   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:40.227084   78377 cri.go:89] found id: ""
	I0422 18:26:40.227114   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.227139   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:40.227148   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:40.227203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:40.264298   78377 cri.go:89] found id: ""
	I0422 18:26:40.264326   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.264333   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:40.264339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:40.264404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:40.302071   78377 cri.go:89] found id: ""
	I0422 18:26:40.302103   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.302113   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:40.302121   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:40.302191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:40.340031   78377 cri.go:89] found id: ""
	I0422 18:26:40.340072   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.340083   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:40.340094   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:40.340108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:40.386371   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:40.386402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:40.438805   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:40.438884   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:40.455199   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:40.455240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:40.535984   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:40.536006   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:40.536024   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.125605   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:43.139961   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:43.140033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:43.176588   78377 cri.go:89] found id: ""
	I0422 18:26:43.176615   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.176625   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:43.176632   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:43.176695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:43.215868   78377 cri.go:89] found id: ""
	I0422 18:26:43.215900   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.215921   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:43.215929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:43.215991   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:43.253562   78377 cri.go:89] found id: ""
	I0422 18:26:43.253592   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.253603   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:43.253608   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:43.253652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:43.289305   78377 cri.go:89] found id: ""
	I0422 18:26:43.289335   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.289346   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:43.289353   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:43.289417   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:43.329241   78377 cri.go:89] found id: ""
	I0422 18:26:43.329286   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.329295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:43.329300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:43.329351   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:43.369682   78377 cri.go:89] found id: ""
	I0422 18:26:43.369700   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.369707   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:43.369713   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:43.369764   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:43.411788   78377 cri.go:89] found id: ""
	I0422 18:26:43.411812   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.411821   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:43.411829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:43.411911   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:43.447351   78377 cri.go:89] found id: ""
	I0422 18:26:43.447387   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.447398   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:43.447407   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:43.447418   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:43.520087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:43.520114   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:43.520125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.602199   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:43.602233   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:43.645723   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:43.645748   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:43.702769   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:43.702804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:41.704418   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.704878   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.362073   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.860279   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.385285   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.882420   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:46.229598   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:46.243348   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:46.243418   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:46.282470   78377 cri.go:89] found id: ""
	I0422 18:26:46.282500   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.282512   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:46.282519   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:46.282584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:46.327718   78377 cri.go:89] found id: ""
	I0422 18:26:46.327747   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.327755   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:46.327761   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:46.327829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:46.369785   78377 cri.go:89] found id: ""
	I0422 18:26:46.369807   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.369814   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:46.369820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:46.369867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:46.408132   78377 cri.go:89] found id: ""
	I0422 18:26:46.408161   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.408170   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:46.408175   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:46.408236   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:46.450058   78377 cri.go:89] found id: ""
	I0422 18:26:46.450084   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.450091   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:46.450096   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:46.450144   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:46.493747   78377 cri.go:89] found id: ""
	I0422 18:26:46.493776   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.493788   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:46.493794   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:46.493847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:46.529054   78377 cri.go:89] found id: ""
	I0422 18:26:46.529090   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.529102   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:46.529122   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:46.529186   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:46.566699   78377 cri.go:89] found id: ""
	I0422 18:26:46.566724   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.566732   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:46.566740   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:46.566752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.582569   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:46.582606   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:46.652188   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:46.652212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:46.652224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:46.732276   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:46.732316   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:46.789834   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:46.789862   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.343229   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:49.357513   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:49.357571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:49.396741   78377 cri.go:89] found id: ""
	I0422 18:26:49.396774   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.396785   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:49.396792   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:49.396862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:49.432048   78377 cri.go:89] found id: ""
	I0422 18:26:49.432081   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.432093   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:49.432100   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:49.432159   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:49.482104   78377 cri.go:89] found id: ""
	I0422 18:26:49.482130   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.482138   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:49.482145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:49.482202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:49.526782   78377 cri.go:89] found id: ""
	I0422 18:26:49.526811   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.526823   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:49.526830   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:49.526884   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:49.575436   78377 cri.go:89] found id: ""
	I0422 18:26:49.575471   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.575482   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:49.575490   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:49.575553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:49.628839   78377 cri.go:89] found id: ""
	I0422 18:26:49.628862   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.628870   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:49.628875   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:49.628940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:45.706474   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:48.205681   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.860748   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.360586   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.884553   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:51.885527   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.387502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.670046   78377 cri.go:89] found id: ""
	I0422 18:26:49.670074   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.670085   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:49.670091   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:49.670158   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:49.707083   78377 cri.go:89] found id: ""
	I0422 18:26:49.707109   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.707119   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:49.707144   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:49.707157   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.762794   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:49.762838   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:49.777771   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:49.777801   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:49.853426   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:49.853448   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:49.853463   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:49.934621   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:49.934659   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:52.481352   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:52.495956   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:52.496025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:52.539518   78377 cri.go:89] found id: ""
	I0422 18:26:52.539549   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.539559   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:52.539566   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:52.539627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:52.580604   78377 cri.go:89] found id: ""
	I0422 18:26:52.580632   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.580641   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:52.580646   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:52.580700   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:52.622746   78377 cri.go:89] found id: ""
	I0422 18:26:52.622775   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.622783   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:52.622795   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:52.622858   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:52.659557   78377 cri.go:89] found id: ""
	I0422 18:26:52.659579   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.659587   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:52.659592   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:52.659661   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:52.697653   78377 cri.go:89] found id: ""
	I0422 18:26:52.697678   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.697685   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:52.697691   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:52.697745   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:52.735505   78377 cri.go:89] found id: ""
	I0422 18:26:52.735536   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.735546   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:52.735554   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:52.735616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:52.774216   78377 cri.go:89] found id: ""
	I0422 18:26:52.774239   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.774247   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:52.774261   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:52.774318   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:52.812909   78377 cri.go:89] found id: ""
	I0422 18:26:52.812934   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.812941   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:52.812949   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:52.812981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:52.897636   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:52.897663   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:52.897679   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:52.985013   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:52.985046   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:53.031395   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:53.031427   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:53.088446   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:53.088480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:50.703624   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.704794   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.204187   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.861314   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:57.360430   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:56.882974   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:58.884770   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.603647   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:55.617977   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:55.618039   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:55.663769   78377 cri.go:89] found id: ""
	I0422 18:26:55.663797   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.663815   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:55.663822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:55.663925   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:55.701287   78377 cri.go:89] found id: ""
	I0422 18:26:55.701326   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.701338   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:55.701346   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:55.701435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:55.740041   78377 cri.go:89] found id: ""
	I0422 18:26:55.740067   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.740078   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:55.740107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:55.740163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:55.779093   78377 cri.go:89] found id: ""
	I0422 18:26:55.779143   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.779154   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:55.779170   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:55.779219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:55.822107   78377 cri.go:89] found id: ""
	I0422 18:26:55.822133   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.822141   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:55.822146   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:55.822195   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:55.862157   78377 cri.go:89] found id: ""
	I0422 18:26:55.862204   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.862215   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:55.862224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:55.862295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:55.902557   78377 cri.go:89] found id: ""
	I0422 18:26:55.902582   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.902595   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:55.902601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:55.902663   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:55.942185   78377 cri.go:89] found id: ""
	I0422 18:26:55.942215   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.942226   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:55.942237   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:55.942252   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.957050   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:55.957083   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:56.035015   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:56.035043   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:56.035058   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:56.125595   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:56.125636   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:56.169096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:56.169131   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:58.725079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:58.739736   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:58.739808   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:58.777724   78377 cri.go:89] found id: ""
	I0422 18:26:58.777752   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.777762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:58.777769   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:58.777828   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:58.814668   78377 cri.go:89] found id: ""
	I0422 18:26:58.814702   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.814713   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:58.814721   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:58.814791   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:58.852609   78377 cri.go:89] found id: ""
	I0422 18:26:58.852634   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.852648   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:58.852655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:58.852720   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:58.891881   78377 cri.go:89] found id: ""
	I0422 18:26:58.891904   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.891910   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:58.891936   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:58.891994   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:58.931663   78377 cri.go:89] found id: ""
	I0422 18:26:58.931690   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.931701   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:58.931708   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:58.931782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:58.967795   78377 cri.go:89] found id: ""
	I0422 18:26:58.967816   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.967823   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:58.967829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:58.967879   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:59.008898   78377 cri.go:89] found id: ""
	I0422 18:26:59.008932   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.008943   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:59.008950   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:59.009007   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:59.049230   78377 cri.go:89] found id: ""
	I0422 18:26:59.049267   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.049278   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:59.049288   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:59.049304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:59.104461   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:59.104508   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:59.119555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:59.119584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:59.195905   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:59.195952   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:59.195969   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:59.276319   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:59.276360   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:57.703613   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:00.205449   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:59.861376   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.862613   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.386313   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:03.883728   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.818221   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:01.833234   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:01.833294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:01.870997   78377 cri.go:89] found id: ""
	I0422 18:27:01.871022   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.871030   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:01.871036   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:01.871102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:01.910414   78377 cri.go:89] found id: ""
	I0422 18:27:01.910443   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.910453   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:01.910461   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:01.910526   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:01.949499   78377 cri.go:89] found id: ""
	I0422 18:27:01.949524   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.949532   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:01.949537   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:01.949598   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:01.987702   78377 cri.go:89] found id: ""
	I0422 18:27:01.987736   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.987747   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:01.987763   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:01.987836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:02.027193   78377 cri.go:89] found id: ""
	I0422 18:27:02.027222   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.027233   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:02.027240   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:02.027332   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:02.067537   78377 cri.go:89] found id: ""
	I0422 18:27:02.067564   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.067578   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:02.067584   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:02.067631   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:02.111085   78377 cri.go:89] found id: ""
	I0422 18:27:02.111112   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.111119   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:02.111140   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:02.111194   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:02.150730   78377 cri.go:89] found id: ""
	I0422 18:27:02.150760   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.150769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:02.150777   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:02.150789   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:02.230124   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:02.230150   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:02.230164   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:02.315337   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:02.315384   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:02.362022   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:02.362048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:02.421884   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:02.421924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:02.205610   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.704158   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.359865   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:06.359968   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.360935   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:05.884072   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.386493   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.937145   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:04.952303   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:04.952412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:04.995024   78377 cri.go:89] found id: ""
	I0422 18:27:04.995059   78377 logs.go:276] 0 containers: []
	W0422 18:27:04.995071   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:04.995079   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:04.995151   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:05.035094   78377 cri.go:89] found id: ""
	I0422 18:27:05.035129   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.035141   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:05.035148   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:05.035204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:05.074178   78377 cri.go:89] found id: ""
	I0422 18:27:05.074204   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.074215   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:05.074222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:05.074294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:05.115285   78377 cri.go:89] found id: ""
	I0422 18:27:05.115313   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.115324   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:05.115331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:05.115398   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:05.151000   78377 cri.go:89] found id: ""
	I0422 18:27:05.151032   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.151041   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:05.151047   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:05.151189   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:05.191627   78377 cri.go:89] found id: ""
	I0422 18:27:05.191651   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.191659   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:05.191664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:05.191710   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:05.232141   78377 cri.go:89] found id: ""
	I0422 18:27:05.232173   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.232183   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:05.232191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:05.232252   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:05.268498   78377 cri.go:89] found id: ""
	I0422 18:27:05.268523   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.268530   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:05.268537   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:05.268554   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:05.315909   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:05.315937   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:05.369623   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:05.369664   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:05.387343   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:05.387381   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:05.466087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:05.466106   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:05.466117   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:08.053578   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:08.067569   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:08.067627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:08.108274   78377 cri.go:89] found id: ""
	I0422 18:27:08.108307   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.108318   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:08.108325   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:08.108384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:08.155343   78377 cri.go:89] found id: ""
	I0422 18:27:08.155366   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.155373   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:08.155379   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:08.155435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:08.194636   78377 cri.go:89] found id: ""
	I0422 18:27:08.194661   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.194672   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:08.194677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:08.194724   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:08.232992   78377 cri.go:89] found id: ""
	I0422 18:27:08.233017   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.233024   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:08.233029   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:08.233076   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:08.271349   78377 cri.go:89] found id: ""
	I0422 18:27:08.271381   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.271391   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:08.271407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:08.271459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:08.311991   78377 cri.go:89] found id: ""
	I0422 18:27:08.312021   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.312033   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:08.312042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:08.312097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:08.353301   78377 cri.go:89] found id: ""
	I0422 18:27:08.353326   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.353333   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:08.353340   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:08.353399   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:08.391989   78377 cri.go:89] found id: ""
	I0422 18:27:08.392015   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.392025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:08.392035   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:08.392048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:08.437228   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:08.437260   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:08.489086   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:08.489121   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:08.503588   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:08.503616   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:08.583824   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:08.583845   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:08.583858   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:07.203802   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:09.204754   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.862854   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.361215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.883779   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:12.883989   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:11.164702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:11.178228   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:11.178293   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:11.217691   78377 cri.go:89] found id: ""
	I0422 18:27:11.217719   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.217729   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:11.217735   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:11.217796   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:11.253648   78377 cri.go:89] found id: ""
	I0422 18:27:11.253676   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.253685   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:11.253692   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:11.253753   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:11.290934   78377 cri.go:89] found id: ""
	I0422 18:27:11.290968   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.290979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:11.290988   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:11.291051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:11.331215   78377 cri.go:89] found id: ""
	I0422 18:27:11.331240   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.331249   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:11.331254   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:11.331344   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:11.371595   78377 cri.go:89] found id: ""
	I0422 18:27:11.371621   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.371629   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:11.371634   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:11.371697   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:11.413577   78377 cri.go:89] found id: ""
	I0422 18:27:11.413607   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.413616   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:11.413624   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:11.413684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:11.450669   78377 cri.go:89] found id: ""
	I0422 18:27:11.450700   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.450709   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:11.450717   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:11.450779   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:11.488096   78377 cri.go:89] found id: ""
	I0422 18:27:11.488122   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.488131   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:11.488142   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:11.488156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.540258   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:11.540299   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:11.555878   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:11.555922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:11.638190   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:11.638212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:11.638224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.719691   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:11.719726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:14.268811   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:14.283695   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:14.283749   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:14.323252   78377 cri.go:89] found id: ""
	I0422 18:27:14.323286   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.323299   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:14.323306   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:14.323370   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:14.362354   78377 cri.go:89] found id: ""
	I0422 18:27:14.362375   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.362382   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:14.362387   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:14.362450   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:14.405439   78377 cri.go:89] found id: ""
	I0422 18:27:14.405460   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.405467   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:14.405473   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:14.405531   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:14.445358   78377 cri.go:89] found id: ""
	I0422 18:27:14.445389   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.445399   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:14.445407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:14.445476   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:14.481933   78377 cri.go:89] found id: ""
	I0422 18:27:14.481961   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.481969   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:14.481974   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:14.482033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:14.526992   78377 cri.go:89] found id: ""
	I0422 18:27:14.527019   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.527028   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:14.527040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:14.527089   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:14.562197   78377 cri.go:89] found id: ""
	I0422 18:27:14.562221   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.562229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:14.562238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:14.562287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:14.599098   78377 cri.go:89] found id: ""
	I0422 18:27:14.599141   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.599153   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:14.599164   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:14.599177   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.205525   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.706785   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:15.861009   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.861214   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.884371   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.384911   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.655768   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:14.655800   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:14.670894   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:14.670929   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:14.759845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:14.759863   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:14.759874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:14.839715   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:14.839752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:17.384859   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:17.399664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:17.399741   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:17.439786   78377 cri.go:89] found id: ""
	I0422 18:27:17.439809   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.439817   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:17.439822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:17.439878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:17.476532   78377 cri.go:89] found id: ""
	I0422 18:27:17.476553   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.476561   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:17.476566   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:17.476623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:17.513464   78377 cri.go:89] found id: ""
	I0422 18:27:17.513488   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.513495   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:17.513500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:17.513546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:17.548793   78377 cri.go:89] found id: ""
	I0422 18:27:17.548821   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.548831   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:17.548838   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:17.548888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:17.584600   78377 cri.go:89] found id: ""
	I0422 18:27:17.584626   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.584636   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:17.584644   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:17.584705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:17.621574   78377 cri.go:89] found id: ""
	I0422 18:27:17.621603   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.621615   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:17.621622   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:17.621686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:17.663252   78377 cri.go:89] found id: ""
	I0422 18:27:17.663283   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.663290   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:17.663295   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:17.663352   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:17.702987   78377 cri.go:89] found id: ""
	I0422 18:27:17.703014   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.703025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:17.703035   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:17.703049   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:17.758182   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:17.758222   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:17.775796   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:17.775828   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:17.866450   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:17.866493   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:17.866507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:17.947651   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:17.947685   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:16.204000   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:18.704622   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.864836   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:22.360984   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.883393   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:21.885743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.384476   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:20.489441   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:20.502920   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:20.502987   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:20.540533   78377 cri.go:89] found id: ""
	I0422 18:27:20.540557   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.540565   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:20.540569   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:20.540612   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:20.578789   78377 cri.go:89] found id: ""
	I0422 18:27:20.578815   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.578824   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:20.578832   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:20.578900   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:20.613481   78377 cri.go:89] found id: ""
	I0422 18:27:20.613515   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.613525   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:20.613533   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:20.613597   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:20.650289   78377 cri.go:89] found id: ""
	I0422 18:27:20.650320   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.650331   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:20.650339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:20.650400   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:20.686259   78377 cri.go:89] found id: ""
	I0422 18:27:20.686288   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.686300   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:20.686306   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:20.686367   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:20.725983   78377 cri.go:89] found id: ""
	I0422 18:27:20.726011   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.726018   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:20.726024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:20.726092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:20.762193   78377 cri.go:89] found id: ""
	I0422 18:27:20.762220   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.762229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:20.762237   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:20.762295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:20.800738   78377 cri.go:89] found id: ""
	I0422 18:27:20.800761   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.800769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:20.800776   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:20.800787   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.842744   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:20.842771   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:20.896307   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:20.896337   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:20.911457   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:20.911485   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:20.985249   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:20.985277   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:20.985293   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:23.560513   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:23.585134   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:23.585214   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:23.624947   78377 cri.go:89] found id: ""
	I0422 18:27:23.624972   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.624980   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:23.624986   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:23.625051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:23.661886   78377 cri.go:89] found id: ""
	I0422 18:27:23.661915   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.661924   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:23.661929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:23.661997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:23.701061   78377 cri.go:89] found id: ""
	I0422 18:27:23.701087   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.701097   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:23.701104   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:23.701163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:23.742728   78377 cri.go:89] found id: ""
	I0422 18:27:23.742753   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.742760   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:23.742765   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:23.742813   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:23.786970   78377 cri.go:89] found id: ""
	I0422 18:27:23.787002   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.787011   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:23.787017   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:23.787070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:23.825253   78377 cri.go:89] found id: ""
	I0422 18:27:23.825282   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.825292   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:23.825300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:23.825357   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:23.865774   78377 cri.go:89] found id: ""
	I0422 18:27:23.865799   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.865807   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:23.865812   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:23.865860   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:23.903212   78377 cri.go:89] found id: ""
	I0422 18:27:23.903239   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.903247   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:23.903254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:23.903267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:23.958931   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:23.958968   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:23.973352   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:23.973383   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:24.053335   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:24.053356   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:24.053367   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:24.136491   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:24.136528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.704821   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:23.203548   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:25.204601   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.361665   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.361708   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.388979   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.882505   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.679983   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:26.694521   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:26.694583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:26.733114   78377 cri.go:89] found id: ""
	I0422 18:27:26.733146   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.733156   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:26.733163   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:26.733221   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:26.776882   78377 cri.go:89] found id: ""
	I0422 18:27:26.776906   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.776913   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:26.776918   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:26.776966   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:26.822830   78377 cri.go:89] found id: ""
	I0422 18:27:26.822863   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.822874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:26.822882   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:26.822945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:26.868600   78377 cri.go:89] found id: ""
	I0422 18:27:26.868633   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.868641   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:26.868655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:26.868712   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:26.907547   78377 cri.go:89] found id: ""
	I0422 18:27:26.907570   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.907578   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:26.907583   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:26.907640   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:26.947594   78377 cri.go:89] found id: ""
	I0422 18:27:26.947635   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.947647   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:26.947656   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:26.947715   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:26.986732   78377 cri.go:89] found id: ""
	I0422 18:27:26.986761   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.986772   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:26.986780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:26.986838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:27.024338   78377 cri.go:89] found id: ""
	I0422 18:27:27.024370   78377 logs.go:276] 0 containers: []
	W0422 18:27:27.024378   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:27.024385   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:27.024396   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:27.077071   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:27.077112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:27.092664   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:27.092694   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:27.173056   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:27.173081   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:27.173099   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:27.257836   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:27.257877   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:27.714190   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.204420   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.861728   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:31.360750   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.360969   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.883051   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.386563   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:29.800456   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:29.816085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:29.816150   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:29.858826   78377 cri.go:89] found id: ""
	I0422 18:27:29.858857   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.858878   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:29.858886   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:29.858956   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:29.900369   78377 cri.go:89] found id: ""
	I0422 18:27:29.900403   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.900417   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:29.900424   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:29.900490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:29.939766   78377 cri.go:89] found id: ""
	I0422 18:27:29.939801   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.939811   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:29.939818   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:29.939889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:29.986579   78377 cri.go:89] found id: ""
	I0422 18:27:29.986607   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.986617   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:29.986625   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:29.986685   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:30.030059   78377 cri.go:89] found id: ""
	I0422 18:27:30.030090   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.030102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:30.030110   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:30.030192   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:30.077543   78377 cri.go:89] found id: ""
	I0422 18:27:30.077573   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.077581   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:30.077586   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:30.077645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:30.123087   78377 cri.go:89] found id: ""
	I0422 18:27:30.123116   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.123137   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:30.123145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:30.123203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:30.160589   78377 cri.go:89] found id: ""
	I0422 18:27:30.160613   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.160621   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:30.160628   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:30.160639   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:30.213321   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:30.213352   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:30.228102   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:30.228129   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:30.303977   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:30.304013   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:30.304029   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:30.383817   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:30.383851   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:32.930619   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:32.943854   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:32.943914   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:32.984112   78377 cri.go:89] found id: ""
	I0422 18:27:32.984138   78377 logs.go:276] 0 containers: []
	W0422 18:27:32.984146   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:32.984151   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:32.984200   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:33.022243   78377 cri.go:89] found id: ""
	I0422 18:27:33.022283   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.022294   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:33.022301   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:33.022366   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:33.061177   78377 cri.go:89] found id: ""
	I0422 18:27:33.061205   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.061214   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:33.061222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:33.061281   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:33.104430   78377 cri.go:89] found id: ""
	I0422 18:27:33.104458   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.104466   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:33.104471   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:33.104528   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:33.140255   78377 cri.go:89] found id: ""
	I0422 18:27:33.140284   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.140295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:33.140302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:33.140362   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:33.179487   78377 cri.go:89] found id: ""
	I0422 18:27:33.179512   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.179519   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:33.179524   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:33.179576   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:33.217226   78377 cri.go:89] found id: ""
	I0422 18:27:33.217258   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.217265   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:33.217271   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:33.217319   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:33.257076   78377 cri.go:89] found id: ""
	I0422 18:27:33.257104   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.257114   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:33.257123   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:33.257137   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:33.271183   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:33.271211   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:33.344812   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:33.344843   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:33.344859   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:33.420605   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:33.420640   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:33.465779   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:33.465807   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:32.704424   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:34.705215   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.861184   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.361048   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.883602   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.383601   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:36.019062   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:36.039226   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:36.039305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:36.082940   78377 cri.go:89] found id: ""
	I0422 18:27:36.082978   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.082991   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:36.083000   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:36.083063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:36.120371   78377 cri.go:89] found id: ""
	I0422 18:27:36.120416   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.120428   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:36.120436   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:36.120496   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:36.158018   78377 cri.go:89] found id: ""
	I0422 18:27:36.158051   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.158063   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:36.158070   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:36.158131   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:36.196192   78377 cri.go:89] found id: ""
	I0422 18:27:36.196221   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.196231   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:36.196238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:36.196305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:36.237742   78377 cri.go:89] found id: ""
	I0422 18:27:36.237773   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.237784   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:36.237791   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:36.237852   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:36.277884   78377 cri.go:89] found id: ""
	I0422 18:27:36.277911   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.277918   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:36.277923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:36.277993   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:36.314897   78377 cri.go:89] found id: ""
	I0422 18:27:36.314929   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.314939   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:36.314947   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:36.315009   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:36.354806   78377 cri.go:89] found id: ""
	I0422 18:27:36.354833   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.354843   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:36.354851   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:36.354863   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.406941   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:36.406981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:36.423308   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:36.423344   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:36.507202   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:36.507223   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:36.507238   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:36.582489   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:36.582525   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:39.127409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:39.140820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:39.140895   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:39.182068   78377 cri.go:89] found id: ""
	I0422 18:27:39.182094   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.182105   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:39.182112   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:39.182169   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:39.222711   78377 cri.go:89] found id: ""
	I0422 18:27:39.222735   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.222751   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:39.222756   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:39.222827   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:39.263396   78377 cri.go:89] found id: ""
	I0422 18:27:39.263423   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.263432   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:39.263437   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:39.263490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:39.300559   78377 cri.go:89] found id: ""
	I0422 18:27:39.300589   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.300603   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:39.300610   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:39.300672   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:39.336486   78377 cri.go:89] found id: ""
	I0422 18:27:39.336521   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.336530   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:39.336536   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:39.336584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:39.373985   78377 cri.go:89] found id: ""
	I0422 18:27:39.374020   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.374030   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:39.374038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:39.374097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:39.412511   78377 cri.go:89] found id: ""
	I0422 18:27:39.412540   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.412547   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:39.412553   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:39.412616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:39.459197   78377 cri.go:89] found id: ""
	I0422 18:27:39.459233   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.459243   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:39.459254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:39.459269   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:39.514579   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:39.514623   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:39.530082   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:39.530107   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:39.603797   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:39.603830   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:39.603854   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:37.203082   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.204563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.860739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.861544   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.385271   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.389273   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.684853   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:39.684890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:42.227702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:42.243438   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:42.243499   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:42.290374   78377 cri.go:89] found id: ""
	I0422 18:27:42.290402   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.290413   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:42.290420   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:42.290481   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:42.332793   78377 cri.go:89] found id: ""
	I0422 18:27:42.332828   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.332840   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:42.332875   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:42.332937   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:42.375844   78377 cri.go:89] found id: ""
	I0422 18:27:42.375876   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.375884   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:42.375889   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:42.375945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:42.419725   78377 cri.go:89] found id: ""
	I0422 18:27:42.419758   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.419769   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:42.419777   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:42.419878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:42.453969   78377 cri.go:89] found id: ""
	I0422 18:27:42.454004   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.454014   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:42.454022   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:42.454080   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:42.489045   78377 cri.go:89] found id: ""
	I0422 18:27:42.489077   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.489087   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:42.489095   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:42.489157   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:42.529127   78377 cri.go:89] found id: ""
	I0422 18:27:42.529155   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.529166   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:42.529174   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:42.529229   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:42.566253   78377 cri.go:89] found id: ""
	I0422 18:27:42.566278   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.566286   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:42.566293   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:42.566307   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:42.622054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:42.622101   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:42.636278   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:42.636304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:42.712179   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:42.712203   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:42.712215   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:42.791885   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:42.791928   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:41.705615   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.203947   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.361656   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:47.860929   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.882684   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:46.886119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:49.382017   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.337091   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:45.353053   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:45.353133   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:45.393230   78377 cri.go:89] found id: ""
	I0422 18:27:45.393257   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.393267   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:45.393274   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:45.393330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:45.432183   78377 cri.go:89] found id: ""
	I0422 18:27:45.432210   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.432220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:45.432228   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:45.432285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:45.468114   78377 cri.go:89] found id: ""
	I0422 18:27:45.468147   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.468157   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:45.468169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:45.468233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:45.504793   78377 cri.go:89] found id: ""
	I0422 18:27:45.504817   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.504836   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:45.504841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:45.504889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:45.544822   78377 cri.go:89] found id: ""
	I0422 18:27:45.544851   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.544862   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:45.544868   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:45.544934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:45.588266   78377 cri.go:89] found id: ""
	I0422 18:27:45.588289   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.588322   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:45.588330   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:45.588391   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:45.625549   78377 cri.go:89] found id: ""
	I0422 18:27:45.625576   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.625583   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:45.625589   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:45.625639   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:45.663066   78377 cri.go:89] found id: ""
	I0422 18:27:45.663096   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.663104   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:45.663114   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:45.663143   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:45.715051   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:45.715082   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:45.729496   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:45.729523   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:45.801270   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:45.801296   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:45.801312   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:45.886530   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:45.886561   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:48.429822   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:48.444528   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:48.444610   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:48.483164   78377 cri.go:89] found id: ""
	I0422 18:27:48.483194   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.483204   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:48.483210   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:48.483257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:48.520295   78377 cri.go:89] found id: ""
	I0422 18:27:48.520321   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.520328   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:48.520333   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:48.520378   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:48.558839   78377 cri.go:89] found id: ""
	I0422 18:27:48.558866   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.558875   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:48.558881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:48.558939   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:48.599692   78377 cri.go:89] found id: ""
	I0422 18:27:48.599715   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.599722   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:48.599728   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:48.599773   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:48.638457   78377 cri.go:89] found id: ""
	I0422 18:27:48.638486   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.638494   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:48.638500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:48.638561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:48.677344   78377 cri.go:89] found id: ""
	I0422 18:27:48.677383   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.677395   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:48.677402   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:48.677466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:48.717129   78377 cri.go:89] found id: ""
	I0422 18:27:48.717155   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.717163   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:48.717169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:48.717219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:48.758256   78377 cri.go:89] found id: ""
	I0422 18:27:48.758281   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.758289   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:48.758297   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:48.758311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:48.810377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:48.810415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:48.824919   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:48.824949   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:48.908446   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:48.908473   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:48.908569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:48.984952   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:48.984991   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:46.703083   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:48.705413   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:50.361465   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:52.364509   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.384561   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.882657   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.527387   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:51.541482   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:51.541560   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.579020   78377 cri.go:89] found id: ""
	I0422 18:27:51.579098   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.579114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:51.579134   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:51.579204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:51.616430   78377 cri.go:89] found id: ""
	I0422 18:27:51.616456   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.616465   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:51.616470   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:51.616516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:51.654089   78377 cri.go:89] found id: ""
	I0422 18:27:51.654120   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.654131   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:51.654138   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:51.654201   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:51.693945   78377 cri.go:89] found id: ""
	I0422 18:27:51.693979   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.693993   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:51.694000   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:51.694068   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:51.732873   78377 cri.go:89] found id: ""
	I0422 18:27:51.732906   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.732917   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:51.732923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:51.732990   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:51.770772   78377 cri.go:89] found id: ""
	I0422 18:27:51.770794   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.770801   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:51.770807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:51.770862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:51.819370   78377 cri.go:89] found id: ""
	I0422 18:27:51.819397   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.819405   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:51.819411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:51.819459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:51.858001   78377 cri.go:89] found id: ""
	I0422 18:27:51.858033   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.858044   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:51.858055   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:51.858069   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:51.938531   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:51.938557   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:51.938571   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:52.014397   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:52.014435   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:52.059420   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:52.059458   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:52.119498   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:52.119534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:54.634238   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:54.649044   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:54.649119   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.203623   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.205834   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.863919   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.360796   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:56.383743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:58.383783   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.691846   78377 cri.go:89] found id: ""
	I0422 18:27:54.691879   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.691890   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:54.691907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:54.691970   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:54.731466   78377 cri.go:89] found id: ""
	I0422 18:27:54.731496   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.731507   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:54.731515   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:54.731588   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:54.776948   78377 cri.go:89] found id: ""
	I0422 18:27:54.776972   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.776979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:54.776984   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:54.777031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:54.815908   78377 cri.go:89] found id: ""
	I0422 18:27:54.815939   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.815946   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:54.815952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:54.815997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:54.856641   78377 cri.go:89] found id: ""
	I0422 18:27:54.856673   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.856684   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:54.856690   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:54.856757   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:54.896968   78377 cri.go:89] found id: ""
	I0422 18:27:54.896996   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.897004   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:54.897009   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:54.897073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:54.936353   78377 cri.go:89] found id: ""
	I0422 18:27:54.936388   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.936400   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:54.936407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:54.936468   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:54.976009   78377 cri.go:89] found id: ""
	I0422 18:27:54.976038   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.976048   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:54.976058   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:54.976071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:55.027890   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:55.027924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:55.041914   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:55.041939   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:55.112556   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.112583   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:55.112597   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:55.187688   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:55.187723   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:57.730259   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:57.745006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:57.745073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:57.786906   78377 cri.go:89] found id: ""
	I0422 18:27:57.786942   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.786952   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:57.786959   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:57.787019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:57.827158   78377 cri.go:89] found id: ""
	I0422 18:27:57.827188   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.827199   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:57.827206   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:57.827254   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:57.864370   78377 cri.go:89] found id: ""
	I0422 18:27:57.864405   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.864413   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:57.864419   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:57.864475   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:57.903747   78377 cri.go:89] found id: ""
	I0422 18:27:57.903773   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.903781   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:57.903786   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:57.903846   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:57.941674   78377 cri.go:89] found id: ""
	I0422 18:27:57.941705   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.941713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:57.941718   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:57.941767   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:57.984888   78377 cri.go:89] found id: ""
	I0422 18:27:57.984918   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.984929   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:57.984935   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:57.984980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:58.026964   78377 cri.go:89] found id: ""
	I0422 18:27:58.026993   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.027006   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:58.027012   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:58.027059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:58.065403   78377 cri.go:89] found id: ""
	I0422 18:27:58.065430   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.065440   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:58.065450   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:58.065464   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:58.152471   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:58.152518   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:58.198766   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:58.198803   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:58.257760   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:58.257798   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:58.272656   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:58.272693   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:58.385784   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.703110   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.704061   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.704421   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.361229   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:01.362273   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.385750   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:02.886349   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.886736   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:00.902607   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:00.902684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:00.941476   78377 cri.go:89] found id: ""
	I0422 18:28:00.941506   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.941515   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:00.941521   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:00.941571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:00.983107   78377 cri.go:89] found id: ""
	I0422 18:28:00.983142   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.983152   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:00.983159   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:00.983216   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:01.024419   78377 cri.go:89] found id: ""
	I0422 18:28:01.024448   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.024455   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:01.024461   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:01.024517   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:01.065941   78377 cri.go:89] found id: ""
	I0422 18:28:01.065973   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.065984   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:01.065992   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:01.066041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:01.107857   78377 cri.go:89] found id: ""
	I0422 18:28:01.107898   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.107908   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:01.107916   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:01.107980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:01.149626   78377 cri.go:89] found id: ""
	I0422 18:28:01.149657   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.149667   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:01.149676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:01.149740   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:01.190491   78377 cri.go:89] found id: ""
	I0422 18:28:01.190520   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.190529   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:01.190535   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:01.190590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:01.231145   78377 cri.go:89] found id: ""
	I0422 18:28:01.231176   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.231187   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:01.231197   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:01.231208   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:01.317826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:01.317874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:01.369441   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:01.369478   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:01.432210   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:01.432251   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:01.446720   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:01.446749   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:01.528643   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.029816   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:04.044751   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:04.044836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:04.085044   78377 cri.go:89] found id: ""
	I0422 18:28:04.085077   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.085089   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:04.085097   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:04.085148   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:04.129071   78377 cri.go:89] found id: ""
	I0422 18:28:04.129100   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.129111   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:04.129118   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:04.129181   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:04.167838   78377 cri.go:89] found id: ""
	I0422 18:28:04.167864   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.167874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:04.167881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:04.167943   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:04.216283   78377 cri.go:89] found id: ""
	I0422 18:28:04.216313   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.216321   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:04.216327   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:04.216376   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:04.255693   78377 cri.go:89] found id: ""
	I0422 18:28:04.255724   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.255731   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:04.255737   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:04.255786   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:04.293601   78377 cri.go:89] found id: ""
	I0422 18:28:04.293639   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.293651   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:04.293659   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:04.293709   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:04.358730   78377 cri.go:89] found id: ""
	I0422 18:28:04.358755   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.358767   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:04.358774   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:04.358837   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:04.399231   78377 cri.go:89] found id: ""
	I0422 18:28:04.399261   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.399271   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:04.399280   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:04.399291   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:04.415526   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:04.415558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:04.491845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.491871   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:04.491885   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:04.575076   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:04.575148   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:04.621931   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:04.621956   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:02.203877   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:04.204896   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:03.860506   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.860713   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.384180   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.884714   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.173117   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:07.188914   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:07.188973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:07.233867   78377 cri.go:89] found id: ""
	I0422 18:28:07.233894   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.233902   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:07.233907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:07.233968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:07.274777   78377 cri.go:89] found id: ""
	I0422 18:28:07.274818   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.274828   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:07.274835   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:07.274897   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:07.310813   78377 cri.go:89] found id: ""
	I0422 18:28:07.310864   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.310874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:07.310881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:07.310951   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:07.348397   78377 cri.go:89] found id: ""
	I0422 18:28:07.348423   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.348431   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:07.348436   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:07.348489   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:07.387344   78377 cri.go:89] found id: ""
	I0422 18:28:07.387371   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.387381   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:07.387388   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:07.387443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:07.426117   78377 cri.go:89] found id: ""
	I0422 18:28:07.426147   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.426158   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:07.426166   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:07.426233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:07.466624   78377 cri.go:89] found id: ""
	I0422 18:28:07.466653   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.466664   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:07.466671   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:07.466729   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:07.504282   78377 cri.go:89] found id: ""
	I0422 18:28:07.504306   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.504342   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:07.504353   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:07.504369   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:07.584111   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:07.584146   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:07.627212   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:07.627240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.676814   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:07.676849   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:07.691117   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:07.691156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:07.764300   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:06.206560   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.703406   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.364348   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.861760   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.361127   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.392330   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:12.883081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.265313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:10.280094   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:10.280170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:10.318208   78377 cri.go:89] found id: ""
	I0422 18:28:10.318236   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.318245   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:10.318251   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:10.318305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:10.353450   78377 cri.go:89] found id: ""
	I0422 18:28:10.353477   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.353484   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:10.353490   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:10.353547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:10.398359   78377 cri.go:89] found id: ""
	I0422 18:28:10.398389   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.398400   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:10.398411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:10.398474   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:10.435896   78377 cri.go:89] found id: ""
	I0422 18:28:10.435928   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.435939   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:10.435946   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:10.436025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:10.479313   78377 cri.go:89] found id: ""
	I0422 18:28:10.479342   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.479353   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:10.479360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:10.479433   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:10.521949   78377 cri.go:89] found id: ""
	I0422 18:28:10.521978   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.521990   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:10.521997   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:10.522054   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:10.557697   78377 cri.go:89] found id: ""
	I0422 18:28:10.557722   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.557732   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:10.557739   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:10.557804   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:10.595060   78377 cri.go:89] found id: ""
	I0422 18:28:10.595090   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.595102   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:10.595112   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:10.595142   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:10.649535   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:10.649570   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:10.664176   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:10.664210   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:10.748778   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.748818   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:10.748839   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:10.858019   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:10.858062   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:13.405737   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:13.420265   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:13.420342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:13.456505   78377 cri.go:89] found id: ""
	I0422 18:28:13.456534   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.456545   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:13.456551   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:13.456611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:13.493435   78377 cri.go:89] found id: ""
	I0422 18:28:13.493464   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.493477   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:13.493485   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:13.493541   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:13.530572   78377 cri.go:89] found id: ""
	I0422 18:28:13.530602   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.530614   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:13.530620   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:13.530682   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:13.565448   78377 cri.go:89] found id: ""
	I0422 18:28:13.565472   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.565480   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:13.565485   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:13.565574   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:13.613806   78377 cri.go:89] found id: ""
	I0422 18:28:13.613840   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.613851   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:13.613860   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:13.613924   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:13.649483   78377 cri.go:89] found id: ""
	I0422 18:28:13.649511   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.649522   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:13.649529   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:13.649589   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:13.689149   78377 cri.go:89] found id: ""
	I0422 18:28:13.689182   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.689193   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:13.689200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:13.689257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:13.726431   78377 cri.go:89] found id: ""
	I0422 18:28:13.726454   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.726461   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:13.726468   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:13.726480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:13.782843   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:13.782882   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:13.797390   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:13.797415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:13.877880   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:13.877905   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:13.877923   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:13.959103   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:13.959154   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:10.705202   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.203760   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.205898   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.361423   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:17.363341   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:14.883352   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.886433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.382478   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.502589   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:16.519996   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:16.520070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:16.559001   78377 cri.go:89] found id: ""
	I0422 18:28:16.559029   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.559037   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:16.559043   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:16.559095   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:16.620188   78377 cri.go:89] found id: ""
	I0422 18:28:16.620211   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.620219   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:16.620224   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:16.620283   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:16.670220   78377 cri.go:89] found id: ""
	I0422 18:28:16.670253   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.670264   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:16.670279   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:16.670345   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:16.710931   78377 cri.go:89] found id: ""
	I0422 18:28:16.710962   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.710973   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:16.710980   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:16.711043   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:16.748793   78377 cri.go:89] found id: ""
	I0422 18:28:16.748838   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.748845   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:16.748851   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:16.748904   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:16.785518   78377 cri.go:89] found id: ""
	I0422 18:28:16.785547   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.785554   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:16.785564   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:16.785616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:16.825141   78377 cri.go:89] found id: ""
	I0422 18:28:16.825174   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.825192   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:16.825200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:16.825265   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:16.866918   78377 cri.go:89] found id: ""
	I0422 18:28:16.866947   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.866958   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:16.866972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:16.866987   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:16.912589   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:16.912633   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:16.968407   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:16.968446   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:16.983202   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:16.983241   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:17.063852   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:17.063875   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:17.063889   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:19.645012   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:17.703917   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.704958   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.861537   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.862949   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.882158   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:23.885280   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.659676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:19.659750   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:19.697348   78377 cri.go:89] found id: ""
	I0422 18:28:19.697382   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.697393   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:19.697401   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:19.697461   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:19.738830   78377 cri.go:89] found id: ""
	I0422 18:28:19.738864   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.738876   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:19.738883   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:19.738945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:19.783452   78377 cri.go:89] found id: ""
	I0422 18:28:19.783476   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.783483   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:19.783491   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:19.783554   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:19.826848   78377 cri.go:89] found id: ""
	I0422 18:28:19.826875   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.826886   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:19.826893   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:19.826945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:19.867207   78377 cri.go:89] found id: ""
	I0422 18:28:19.867229   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.867236   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:19.867242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:19.867298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:19.903752   78377 cri.go:89] found id: ""
	I0422 18:28:19.903783   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.903799   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:19.903806   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:19.903870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:19.946891   78377 cri.go:89] found id: ""
	I0422 18:28:19.946914   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.946921   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:19.946927   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:19.946997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:19.989272   78377 cri.go:89] found id: ""
	I0422 18:28:19.989297   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.989304   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:19.989312   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:19.989323   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:20.038854   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:20.038887   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:20.053553   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:20.053584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:20.132687   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:20.132712   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:20.132727   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:20.209600   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:20.209634   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.752356   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:22.765506   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:22.765567   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:22.804991   78377 cri.go:89] found id: ""
	I0422 18:28:22.805022   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.805029   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:22.805035   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:22.805082   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:22.843726   78377 cri.go:89] found id: ""
	I0422 18:28:22.843757   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.843768   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:22.843775   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:22.843838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:22.884584   78377 cri.go:89] found id: ""
	I0422 18:28:22.884610   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.884620   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:22.884627   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:22.884701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:22.920974   78377 cri.go:89] found id: ""
	I0422 18:28:22.921004   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.921020   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:22.921028   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:22.921092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:22.956676   78377 cri.go:89] found id: ""
	I0422 18:28:22.956702   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.956713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:22.956720   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:22.956784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:22.997517   78377 cri.go:89] found id: ""
	I0422 18:28:22.997545   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.997553   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:22.997559   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:22.997623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:23.036448   78377 cri.go:89] found id: ""
	I0422 18:28:23.036478   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.036489   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:23.036497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:23.036561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:23.075567   78377 cri.go:89] found id: ""
	I0422 18:28:23.075592   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.075600   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:23.075611   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:23.075625   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:23.130372   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:23.130408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:23.147534   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:23.147567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:23.222730   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:23.222753   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:23.222765   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:23.301972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:23.302006   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.204356   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.703765   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.361251   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:26.862825   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.886291   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:28.382905   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.847521   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:25.861780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:25.861867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:25.899314   78377 cri.go:89] found id: ""
	I0422 18:28:25.899341   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.899349   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:25.899355   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:25.899412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:25.940057   78377 cri.go:89] found id: ""
	I0422 18:28:25.940088   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.940099   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:25.940106   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:25.940163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:25.974923   78377 cri.go:89] found id: ""
	I0422 18:28:25.974951   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.974959   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:25.974968   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:25.975041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:26.012533   78377 cri.go:89] found id: ""
	I0422 18:28:26.012559   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.012566   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:26.012572   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:26.012620   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:26.049804   78377 cri.go:89] found id: ""
	I0422 18:28:26.049828   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.049835   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:26.049841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:26.049888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:26.092803   78377 cri.go:89] found id: ""
	I0422 18:28:26.092830   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.092842   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:26.092850   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:26.092919   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:26.130442   78377 cri.go:89] found id: ""
	I0422 18:28:26.130471   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.130480   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:26.130487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:26.130544   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:26.165933   78377 cri.go:89] found id: ""
	I0422 18:28:26.165957   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.165966   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:26.165974   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:26.165986   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:26.245237   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:26.245259   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:26.245278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:26.330143   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:26.330181   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.372178   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:26.372204   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:26.429779   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:26.429817   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:28.945985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:28.960470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:28.960546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:28.999618   78377 cri.go:89] found id: ""
	I0422 18:28:28.999639   78377 logs.go:276] 0 containers: []
	W0422 18:28:28.999648   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:28.999653   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:28.999711   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:29.034177   78377 cri.go:89] found id: ""
	I0422 18:28:29.034211   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.034220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:29.034225   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:29.034286   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:29.073759   78377 cri.go:89] found id: ""
	I0422 18:28:29.073782   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.073790   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:29.073796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:29.073857   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:29.111898   78377 cri.go:89] found id: ""
	I0422 18:28:29.111929   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.111941   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:29.111948   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:29.112005   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:29.148486   78377 cri.go:89] found id: ""
	I0422 18:28:29.148520   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.148531   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:29.148539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:29.148602   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:29.186715   78377 cri.go:89] found id: ""
	I0422 18:28:29.186743   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.186753   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:29.186759   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:29.186805   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:29.226387   78377 cri.go:89] found id: ""
	I0422 18:28:29.226422   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.226433   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:29.226440   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:29.226508   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:29.274102   78377 cri.go:89] found id: ""
	I0422 18:28:29.274131   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.274142   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:29.274152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:29.274165   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:29.333066   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:29.333104   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:29.348376   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:29.348411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:29.422976   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:29.423009   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:29.423022   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:29.501211   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:29.501253   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.705590   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.205641   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.361439   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:31.361534   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:30.383502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.887006   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.048316   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:32.063859   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:32.063934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:32.104527   78377 cri.go:89] found id: ""
	I0422 18:28:32.104560   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.104571   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:32.104580   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:32.104645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:32.142945   78377 cri.go:89] found id: ""
	I0422 18:28:32.142976   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.142984   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:32.142990   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:32.143036   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:32.182359   78377 cri.go:89] found id: ""
	I0422 18:28:32.182385   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.182393   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:32.182399   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:32.182446   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:32.223041   78377 cri.go:89] found id: ""
	I0422 18:28:32.223069   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.223077   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:32.223083   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:32.223161   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:32.261892   78377 cri.go:89] found id: ""
	I0422 18:28:32.261924   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.261936   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:32.261943   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:32.262008   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:32.307497   78377 cri.go:89] found id: ""
	I0422 18:28:32.307527   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.307537   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:32.307546   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:32.307617   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:32.345180   78377 cri.go:89] found id: ""
	I0422 18:28:32.345214   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.345227   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:32.345235   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:32.345299   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:32.385999   78377 cri.go:89] found id: ""
	I0422 18:28:32.386025   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.386033   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:32.386041   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:32.386053   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:32.444377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:32.444436   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:32.460566   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:32.460594   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:32.535839   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:32.535860   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:32.535872   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:32.621998   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:32.622039   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:31.704145   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.704841   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.860769   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.860833   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.861583   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.382871   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.383164   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.165079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:35.178804   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:35.178877   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:35.221032   78377 cri.go:89] found id: ""
	I0422 18:28:35.221065   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.221076   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:35.221083   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:35.221170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:35.262550   78377 cri.go:89] found id: ""
	I0422 18:28:35.262573   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.262583   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:35.262589   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:35.262651   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:35.301799   78377 cri.go:89] found id: ""
	I0422 18:28:35.301826   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.301834   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:35.301840   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:35.301901   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:35.340606   78377 cri.go:89] found id: ""
	I0422 18:28:35.340635   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.340642   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:35.340647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:35.340695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:35.386226   78377 cri.go:89] found id: ""
	I0422 18:28:35.386251   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.386261   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:35.386268   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:35.386330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:35.424555   78377 cri.go:89] found id: ""
	I0422 18:28:35.424584   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.424594   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:35.424601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:35.424662   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:35.465856   78377 cri.go:89] found id: ""
	I0422 18:28:35.465886   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.465895   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:35.465901   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:35.465963   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:35.504849   78377 cri.go:89] found id: ""
	I0422 18:28:35.504877   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.504887   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:35.504898   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:35.504931   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:35.579177   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:35.579202   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:35.579217   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:35.656322   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:35.656359   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.700376   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:35.700411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:35.753742   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:35.753776   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.269536   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:38.285945   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:38.286019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:38.324408   78377 cri.go:89] found id: ""
	I0422 18:28:38.324441   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.324461   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:38.324468   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:38.324539   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:38.362320   78377 cri.go:89] found id: ""
	I0422 18:28:38.362343   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.362350   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:38.362363   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:38.362411   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:38.404208   78377 cri.go:89] found id: ""
	I0422 18:28:38.404234   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.404243   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:38.404248   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:38.404309   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:38.448250   78377 cri.go:89] found id: ""
	I0422 18:28:38.448314   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.448325   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:38.448332   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:38.448397   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:38.485803   78377 cri.go:89] found id: ""
	I0422 18:28:38.485836   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.485848   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:38.485856   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:38.485915   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:38.525903   78377 cri.go:89] found id: ""
	I0422 18:28:38.525933   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.525943   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:38.525952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:38.526031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:38.562638   78377 cri.go:89] found id: ""
	I0422 18:28:38.562664   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.562672   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:38.562677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:38.562726   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:38.603614   78377 cri.go:89] found id: ""
	I0422 18:28:38.603642   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.603653   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:38.603662   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:38.603673   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:38.658054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:38.658086   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.674884   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:38.674908   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:38.748462   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:38.748502   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:38.748528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:38.826701   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:38.826741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:36.204210   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:38.205076   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:40.360574   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.862692   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:39.882407   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.882939   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:43.883102   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.374075   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:41.389161   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:41.389235   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:41.427033   78377 cri.go:89] found id: ""
	I0422 18:28:41.427064   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.427075   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:41.427096   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:41.427178   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:41.465376   78377 cri.go:89] found id: ""
	I0422 18:28:41.465408   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.465419   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:41.465427   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:41.465512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:41.502451   78377 cri.go:89] found id: ""
	I0422 18:28:41.502482   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.502490   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:41.502501   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:41.502563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:41.538748   78377 cri.go:89] found id: ""
	I0422 18:28:41.538784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.538796   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:41.538803   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:41.538862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:41.576877   78377 cri.go:89] found id: ""
	I0422 18:28:41.576928   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.576941   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:41.576949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:41.577010   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:41.615062   78377 cri.go:89] found id: ""
	I0422 18:28:41.615094   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.615105   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:41.615113   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:41.615190   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:41.656757   78377 cri.go:89] found id: ""
	I0422 18:28:41.656784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.656792   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:41.656796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:41.656861   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:41.694351   78377 cri.go:89] found id: ""
	I0422 18:28:41.694374   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.694382   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:41.694390   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:41.694402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:41.775490   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:41.775528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.820152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:41.820182   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:41.874035   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:41.874071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:41.889510   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:41.889534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:41.967706   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:44.468471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:44.483108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:44.483202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:44.522503   78377 cri.go:89] found id: ""
	I0422 18:28:44.522528   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.522536   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:44.522542   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:44.522590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:44.562004   78377 cri.go:89] found id: ""
	I0422 18:28:44.562028   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.562036   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:44.562042   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:44.562098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:44.608907   78377 cri.go:89] found id: ""
	I0422 18:28:44.608944   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.608955   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:44.608964   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:44.609027   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:44.651192   78377 cri.go:89] found id: ""
	I0422 18:28:44.651225   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.651235   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:44.651242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:44.651304   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:40.703806   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.704426   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.707600   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.361890   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.860686   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.883300   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.884863   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.693057   78377 cri.go:89] found id: ""
	I0422 18:28:44.693095   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.693102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:44.693108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:44.693152   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:44.731029   78377 cri.go:89] found id: ""
	I0422 18:28:44.731070   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.731079   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:44.731092   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:44.731165   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:44.768935   78377 cri.go:89] found id: ""
	I0422 18:28:44.768964   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.768985   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:44.768993   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:44.769044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:44.814942   78377 cri.go:89] found id: ""
	I0422 18:28:44.814966   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.814984   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:44.814992   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:44.815012   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:44.872586   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:44.872612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:44.929068   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:44.929125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:44.945931   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:44.945960   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:45.019871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:45.019907   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:45.019922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:47.601880   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:47.616133   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:47.616219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:47.656526   78377 cri.go:89] found id: ""
	I0422 18:28:47.656547   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.656554   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:47.656560   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:47.656618   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:47.696580   78377 cri.go:89] found id: ""
	I0422 18:28:47.696609   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.696619   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:47.696626   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:47.696684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:47.737309   78377 cri.go:89] found id: ""
	I0422 18:28:47.737340   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.737351   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:47.737359   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:47.737413   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:47.774541   78377 cri.go:89] found id: ""
	I0422 18:28:47.774572   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.774583   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:47.774591   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:47.774652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:47.810397   78377 cri.go:89] found id: ""
	I0422 18:28:47.810429   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.810437   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:47.810444   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:47.810506   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:47.847293   78377 cri.go:89] found id: ""
	I0422 18:28:47.847327   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.847337   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:47.847345   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:47.847403   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:47.887454   78377 cri.go:89] found id: ""
	I0422 18:28:47.887476   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.887486   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:47.887493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:47.887553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:47.926706   78377 cri.go:89] found id: ""
	I0422 18:28:47.926731   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.926740   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:47.926750   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:47.926769   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:48.007354   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:48.007382   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:48.007398   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:48.094355   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:48.094394   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:48.137163   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:48.137194   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:48.187732   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:48.187767   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:47.207153   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.704440   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.863696   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.360739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.384172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.386468   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.703686   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:50.717040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:50.717113   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:50.751573   78377 cri.go:89] found id: ""
	I0422 18:28:50.751598   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.751610   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:50.751617   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:50.751674   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:50.790434   78377 cri.go:89] found id: ""
	I0422 18:28:50.790465   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.790476   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:50.790483   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:50.790537   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:50.852414   78377 cri.go:89] found id: ""
	I0422 18:28:50.852442   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.852451   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:50.852457   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:50.852512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:50.891439   78377 cri.go:89] found id: ""
	I0422 18:28:50.891470   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.891481   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:50.891488   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:50.891553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:50.929376   78377 cri.go:89] found id: ""
	I0422 18:28:50.929409   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.929420   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:50.929428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:50.929493   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:50.963919   78377 cri.go:89] found id: ""
	I0422 18:28:50.963949   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.963957   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:50.963963   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:50.964022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:50.998583   78377 cri.go:89] found id: ""
	I0422 18:28:50.998621   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.998632   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:50.998640   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:50.998702   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:51.036477   78377 cri.go:89] found id: ""
	I0422 18:28:51.036504   78377 logs.go:276] 0 containers: []
	W0422 18:28:51.036511   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:51.036519   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:51.036531   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:51.092688   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:51.092735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.107749   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:51.107778   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:51.185620   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:51.185643   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:51.185665   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:51.268824   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:51.268856   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:53.814341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:53.829048   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:53.829123   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:53.873451   78377 cri.go:89] found id: ""
	I0422 18:28:53.873483   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.873493   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:53.873500   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:53.873564   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:53.915262   78377 cri.go:89] found id: ""
	I0422 18:28:53.915295   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.915306   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:53.915315   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:53.915404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:53.958526   78377 cri.go:89] found id: ""
	I0422 18:28:53.958556   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.958567   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:53.958575   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:53.958645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:53.997452   78377 cri.go:89] found id: ""
	I0422 18:28:53.997484   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.997496   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:53.997503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:53.997563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:54.035937   78377 cri.go:89] found id: ""
	I0422 18:28:54.035961   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.035970   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:54.035975   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:54.036022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:54.078858   78377 cri.go:89] found id: ""
	I0422 18:28:54.078885   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.078893   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:54.078898   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:54.078959   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:54.117431   78377 cri.go:89] found id: ""
	I0422 18:28:54.117454   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.117462   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:54.117470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:54.117516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:54.156022   78377 cri.go:89] found id: ""
	I0422 18:28:54.156050   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.156059   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:54.156068   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:54.156085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:54.234075   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:54.234095   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:54.234108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:54.314392   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:54.314430   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:54.359388   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:54.359420   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:54.416412   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:54.416449   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.704563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.206032   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.362075   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.861096   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.883667   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:57.386081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.934970   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:56.948741   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:56.948820   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:56.984911   78377 cri.go:89] found id: ""
	I0422 18:28:56.984943   78377 logs.go:276] 0 containers: []
	W0422 18:28:56.984954   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:56.984961   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:56.985026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:57.022939   78377 cri.go:89] found id: ""
	I0422 18:28:57.022967   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.022980   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:57.022986   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:57.023033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:57.064582   78377 cri.go:89] found id: ""
	I0422 18:28:57.064606   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.064619   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:57.064626   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:57.064686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:57.105214   78377 cri.go:89] found id: ""
	I0422 18:28:57.105248   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.105259   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:57.105266   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:57.105317   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:57.142061   78377 cri.go:89] found id: ""
	I0422 18:28:57.142093   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.142104   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:57.142112   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:57.142176   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:57.187628   78377 cri.go:89] found id: ""
	I0422 18:28:57.187658   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.187668   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:57.187675   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:57.187744   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:57.223614   78377 cri.go:89] found id: ""
	I0422 18:28:57.223637   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.223645   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:57.223650   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:57.223705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:57.261853   78377 cri.go:89] found id: ""
	I0422 18:28:57.261876   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.261883   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:57.261890   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:57.261902   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:57.317980   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:57.318017   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:57.334434   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:57.334469   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:57.409639   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:57.409664   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:57.409680   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:57.494197   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:57.494240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:56.709043   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.203924   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:58.861932   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.360398   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.360867   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.882692   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.883267   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.383872   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:00.069390   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:00.083231   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:00.083307   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:00.123418   78377 cri.go:89] found id: ""
	I0422 18:29:00.123448   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.123459   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:00.123470   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:00.123533   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:00.159047   78377 cri.go:89] found id: ""
	I0422 18:29:00.159070   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.159081   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:00.159087   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:00.159191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:00.197934   78377 cri.go:89] found id: ""
	I0422 18:29:00.197960   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.198074   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:00.198086   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:00.198164   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:00.235243   78377 cri.go:89] found id: ""
	I0422 18:29:00.235273   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.235281   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:00.235287   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:00.235342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:00.271866   78377 cri.go:89] found id: ""
	I0422 18:29:00.271901   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.271912   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:00.271921   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:00.271981   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:00.308481   78377 cri.go:89] found id: ""
	I0422 18:29:00.308518   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.308531   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:00.308539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:00.308590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:00.343970   78377 cri.go:89] found id: ""
	I0422 18:29:00.343998   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.344009   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:00.344016   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:00.344063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:00.381443   78377 cri.go:89] found id: ""
	I0422 18:29:00.381462   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.381468   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:00.381475   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:00.381486   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:00.436244   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:00.436278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:00.451487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:00.451512   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:00.522440   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:00.522467   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:00.522483   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:00.602301   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:00.602333   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:03.141925   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:03.155393   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:03.155470   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:03.192801   78377 cri.go:89] found id: ""
	I0422 18:29:03.192825   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.192832   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:03.192838   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:03.192896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:03.244352   78377 cri.go:89] found id: ""
	I0422 18:29:03.244384   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.244395   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:03.244403   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:03.244466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:03.303294   78377 cri.go:89] found id: ""
	I0422 18:29:03.303318   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.303326   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:03.303331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:03.303384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:03.354236   78377 cri.go:89] found id: ""
	I0422 18:29:03.354267   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.354275   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:03.354282   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:03.354343   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:03.394639   78377 cri.go:89] found id: ""
	I0422 18:29:03.394669   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.394679   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:03.394686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:03.394754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:03.431362   78377 cri.go:89] found id: ""
	I0422 18:29:03.431408   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.431419   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:03.431428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:03.431494   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:03.472150   78377 cri.go:89] found id: ""
	I0422 18:29:03.472178   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.472186   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:03.472191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:03.472253   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:03.508059   78377 cri.go:89] found id: ""
	I0422 18:29:03.508083   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.508091   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:03.508100   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:03.508112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:03.557491   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:03.557528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:03.573208   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:03.573245   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:03.643262   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:03.643284   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:03.643295   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:03.726353   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:03.726389   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:01.204827   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.204916   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.355065   77634 pod_ready.go:81] duration metric: took 4m0.0011361s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:04.355113   77634 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:04.355148   77634 pod_ready.go:38] duration metric: took 4m14.498231749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:04.355180   77634 kubeadm.go:591] duration metric: took 4m21.764385121s to restartPrimaryControlPlane
	W0422 18:29:04.355236   77634 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:04.355261   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
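The interleaved pod_ready lines belong to three other minikube processes (77400, 77634, 77929), each waiting for a metrics-server pod in the kube-system namespace to report Ready. For process 77634 that wait has just hit its 4m0s limit, so minikube gives up on restarting the existing control plane and resets the cluster with `kubeadm reset`, as the warning above states. To see why such a pod stays NotReady, a manual check along these lines would work (this assumes a usable kubeconfig for that profile; the `k8s-app=metrics-server` label selector and the Deployment name are the conventional metrics-server defaults and are assumptions, not read from this log):

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod metrics-server-569cc877fc-d8s5p   # check Events for image-pull or readiness-probe failures
    kubectl -n kube-system logs deploy/metrics-server                     # assumes the Deployment is named "metrics-server"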
	I0422 18:29:06.385395   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:08.883604   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:06.270762   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:06.284792   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:06.284866   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:06.324717   78377 cri.go:89] found id: ""
	I0422 18:29:06.324750   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.324762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:06.324770   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:06.324829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:06.368279   78377 cri.go:89] found id: ""
	I0422 18:29:06.368311   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.368320   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:06.368326   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:06.368390   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:06.413754   78377 cri.go:89] found id: ""
	I0422 18:29:06.413789   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.413800   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:06.413807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:06.413864   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:06.453290   78377 cri.go:89] found id: ""
	I0422 18:29:06.453324   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.453335   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:06.453343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:06.453402   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:06.494420   78377 cri.go:89] found id: ""
	I0422 18:29:06.494472   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.494485   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:06.494493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:06.494547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:06.533736   78377 cri.go:89] found id: ""
	I0422 18:29:06.533768   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.533776   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:06.533784   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:06.533855   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:06.575873   78377 cri.go:89] found id: ""
	I0422 18:29:06.575899   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.575910   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:06.575917   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:06.575973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:06.620505   78377 cri.go:89] found id: ""
	I0422 18:29:06.620532   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.620541   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:06.620555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:06.620569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:06.701583   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:06.701607   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:06.701621   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:06.789370   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:06.789408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.832879   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:06.832915   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:06.892055   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:06.892085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:09.409104   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:09.422213   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:09.422287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:09.463906   78377 cri.go:89] found id: ""
	I0422 18:29:09.463938   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.463949   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:09.463956   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:09.464016   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:09.504600   78377 cri.go:89] found id: ""
	I0422 18:29:09.504626   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.504634   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:09.504640   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:09.504701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:09.544271   78377 cri.go:89] found id: ""
	I0422 18:29:09.544297   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.544308   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:09.544315   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:09.544385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:09.584323   78377 cri.go:89] found id: ""
	I0422 18:29:09.584355   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.584367   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:09.584375   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:09.584443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:09.621595   78377 cri.go:89] found id: ""
	I0422 18:29:09.621622   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.621632   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:09.621638   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:09.621703   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:05.703491   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:07.704534   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.705814   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:11.383569   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:13.883521   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.654701   78377 cri.go:89] found id: ""
	I0422 18:29:09.654731   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.654741   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:09.654749   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:09.654809   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:09.691517   78377 cri.go:89] found id: ""
	I0422 18:29:09.691544   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.691555   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:09.691561   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:09.691611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:09.726139   78377 cri.go:89] found id: ""
	I0422 18:29:09.726164   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.726172   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:09.726179   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:09.726192   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:09.796871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:09.796899   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:09.796920   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:09.876465   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:09.876509   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:09.917893   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:09.917930   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:09.968232   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:09.968273   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:12.484341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:12.499173   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:12.499243   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:12.536536   78377 cri.go:89] found id: ""
	I0422 18:29:12.536566   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.536577   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:12.536583   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:12.536642   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:12.578616   78377 cri.go:89] found id: ""
	I0422 18:29:12.578645   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.578655   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:12.578663   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:12.578742   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:12.615437   78377 cri.go:89] found id: ""
	I0422 18:29:12.615464   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.615475   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:12.615483   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:12.615552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:12.652622   78377 cri.go:89] found id: ""
	I0422 18:29:12.652647   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.652655   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:12.652661   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:12.652717   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:12.687831   78377 cri.go:89] found id: ""
	I0422 18:29:12.687863   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.687886   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:12.687895   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:12.687968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:12.725695   78377 cri.go:89] found id: ""
	I0422 18:29:12.725727   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.725734   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:12.725740   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:12.725801   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:12.764633   78377 cri.go:89] found id: ""
	I0422 18:29:12.764660   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.764669   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:12.764676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:12.764754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:12.803161   78377 cri.go:89] found id: ""
	I0422 18:29:12.803188   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.803199   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:12.803209   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:12.803225   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:12.874276   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:12.874298   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:12.874311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:12.961086   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:12.961123   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:13.009108   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:13.009134   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:13.060695   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:13.060741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:11.706608   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:14.204779   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:16.384284   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.884060   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:15.578465   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:15.592781   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:15.592847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:15.630723   78377 cri.go:89] found id: ""
	I0422 18:29:15.630763   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.630775   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:15.630784   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:15.630848   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:15.672656   78377 cri.go:89] found id: ""
	I0422 18:29:15.672682   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.672689   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:15.672694   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:15.672743   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:15.718081   78377 cri.go:89] found id: ""
	I0422 18:29:15.718107   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.718115   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:15.718120   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:15.718168   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:15.757204   78377 cri.go:89] found id: ""
	I0422 18:29:15.757229   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.757237   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:15.757242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:15.757289   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:15.793481   78377 cri.go:89] found id: ""
	I0422 18:29:15.793507   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.793515   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:15.793520   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:15.793571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:15.831366   78377 cri.go:89] found id: ""
	I0422 18:29:15.831414   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.831435   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:15.831443   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:15.831510   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:15.868553   78377 cri.go:89] found id: ""
	I0422 18:29:15.868583   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.868593   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:15.868601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:15.868657   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:15.908487   78377 cri.go:89] found id: ""
	I0422 18:29:15.908517   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.908527   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:15.908538   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:15.908553   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.923479   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:15.923507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:15.995109   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:15.995156   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:15.995172   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:16.074773   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:16.074812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.122088   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:16.122114   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:18.674525   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:18.688006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:18.688077   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:18.726070   78377 cri.go:89] found id: ""
	I0422 18:29:18.726101   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.726114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:18.726122   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:18.726183   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:18.762885   78377 cri.go:89] found id: ""
	I0422 18:29:18.762916   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.762928   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:18.762936   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:18.762996   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:18.802266   78377 cri.go:89] found id: ""
	I0422 18:29:18.802289   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.802297   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:18.802302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:18.802349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:18.841407   78377 cri.go:89] found id: ""
	I0422 18:29:18.841445   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.841453   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:18.841459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:18.841515   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:18.877234   78377 cri.go:89] found id: ""
	I0422 18:29:18.877308   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.877330   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:18.877343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:18.877410   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:18.917025   78377 cri.go:89] found id: ""
	I0422 18:29:18.917056   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.917063   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:18.917068   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:18.917124   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:18.954201   78377 cri.go:89] found id: ""
	I0422 18:29:18.954228   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.954235   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:18.954241   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:18.954298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:18.992427   78377 cri.go:89] found id: ""
	I0422 18:29:18.992454   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.992463   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:18.992471   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:18.992482   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:19.041093   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:19.041125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:19.056711   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:19.056742   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:19.142569   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:19.142593   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:19.142604   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:19.217815   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:19.217855   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.704652   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.704899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:21.391438   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:22.376750   77929 pod_ready.go:81] duration metric: took 4m0.000534542s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:22.376787   77929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:22.376811   77929 pod_ready.go:38] duration metric: took 4m11.560762914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:22.376844   77929 kubeadm.go:591] duration metric: took 4m19.827120959s to restartPrimaryControlPlane
	W0422 18:29:22.376929   77929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:22.376953   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:21.767953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:21.783373   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:21.783428   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:21.821614   78377 cri.go:89] found id: ""
	I0422 18:29:21.821644   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.821656   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:21.821664   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:21.821725   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:21.857122   78377 cri.go:89] found id: ""
	I0422 18:29:21.857151   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.857161   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:21.857168   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:21.857228   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:21.894803   78377 cri.go:89] found id: ""
	I0422 18:29:21.894825   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.894833   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:21.894841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:21.894896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:21.933665   78377 cri.go:89] found id: ""
	I0422 18:29:21.933701   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.933712   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:21.933723   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:21.933787   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:21.973071   78377 cri.go:89] found id: ""
	I0422 18:29:21.973113   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.973125   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:21.973143   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:21.973210   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:22.011359   78377 cri.go:89] found id: ""
	I0422 18:29:22.011391   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.011403   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:22.011410   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:22.011488   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:22.049681   78377 cri.go:89] found id: ""
	I0422 18:29:22.049709   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.049716   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:22.049721   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:22.049782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:22.088347   78377 cri.go:89] found id: ""
	I0422 18:29:22.088375   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.088386   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:22.088396   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:22.088410   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:22.142224   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:22.142267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:22.156643   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:22.156668   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:22.231849   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:22.231879   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:22.231892   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:22.313426   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:22.313470   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:21.203699   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:23.204704   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:25.206832   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:24.863473   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:24.882024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:24.882098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:24.924050   78377 cri.go:89] found id: ""
	I0422 18:29:24.924081   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.924092   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:24.924100   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:24.924163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:24.976296   78377 cri.go:89] found id: ""
	I0422 18:29:24.976326   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.976335   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:24.976345   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:24.976412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:25.029222   78377 cri.go:89] found id: ""
	I0422 18:29:25.029251   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.029272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:25.029280   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:25.029349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:25.077673   78377 cri.go:89] found id: ""
	I0422 18:29:25.077706   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.077717   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:25.077724   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:25.077784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:25.125043   78377 cri.go:89] found id: ""
	I0422 18:29:25.125078   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.125090   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:25.125098   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:25.125179   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:25.175533   78377 cri.go:89] found id: ""
	I0422 18:29:25.175566   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.175577   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:25.175585   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:25.175647   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:25.221986   78377 cri.go:89] found id: ""
	I0422 18:29:25.222016   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.222024   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:25.222030   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:25.222091   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:25.264497   78377 cri.go:89] found id: ""
	I0422 18:29:25.264536   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.264547   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:25.264558   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:25.264574   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:25.374379   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:25.374438   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:25.418690   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:25.418726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:25.472266   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:25.472300   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:25.488487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:25.488582   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:25.586957   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:28.087958   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:28.102224   78377 kubeadm.go:591] duration metric: took 4m2.253635072s to restartPrimaryControlPlane
	W0422 18:29:28.102310   78377 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:28.102339   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:27.706178   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:30.203899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:31.612457   78377 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.510090318s)
	I0422 18:29:31.612545   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:31.628958   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:31.640917   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:31.652696   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:31.652721   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:31.652770   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:31.664114   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:31.664168   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:31.674923   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:31.684843   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:31.684896   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:31.695240   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.706058   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:31.706111   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.717091   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:31.727265   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:31.727336   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:31.737801   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:31.812467   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:29:31.812529   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:31.966913   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:31.967059   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:31.967197   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:32.154019   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:32.156034   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:32.156123   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:32.156226   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:32.156318   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:32.156373   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:32.156431   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:32.156486   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:32.156545   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:32.156925   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:32.157393   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:32.157903   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:32.157945   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:32.158030   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:32.431206   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:32.644858   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:32.778777   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:32.983609   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:32.999320   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:32.999451   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:32.999532   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:33.136671   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:33.138828   78377 out.go:204]   - Booting up control plane ...
	I0422 18:29:33.138935   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:33.143714   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:33.145398   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:33.157636   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:33.157801   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:29:32.204107   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:34.707228   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:36.541281   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.185998541s)
	I0422 18:29:36.541367   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:36.558729   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:36.569635   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:36.579901   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:36.579919   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:36.579959   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:36.589540   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:36.589602   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:36.600704   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:36.610945   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:36.611012   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:36.621316   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.631251   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:36.631305   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.641661   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:36.650970   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:36.651049   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:36.661012   77634 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:36.717676   77634 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:36.717771   77634 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:36.861264   77634 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:36.861404   77634 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:36.861534   77634 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:37.083032   77634 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:37.084958   77634 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:37.085069   77634 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:37.085179   77634 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:37.085296   77634 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:37.085387   77634 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:37.085505   77634 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:37.085579   77634 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:37.085665   77634 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:37.085748   77634 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:37.085869   77634 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:37.085985   77634 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:37.086037   77634 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:37.086114   77634 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:37.337747   77634 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:37.538036   77634 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:37.630303   77634 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:37.755713   77634 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:38.081451   77634 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:38.082265   77634 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:38.084958   77634 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:38.086755   77634 out.go:204]   - Booting up control plane ...
	I0422 18:29:38.086893   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:38.087023   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:38.089714   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:38.108313   77634 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:38.108786   77634 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:38.108849   77634 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:38.241537   77634 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:38.241681   77634 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:37.203550   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:39.205619   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:38.743798   77634 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.847818ms
	I0422 18:29:38.743910   77634 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:44.245440   77634 kubeadm.go:309] [api-check] The API server is healthy after 5.501913498s
	I0422 18:29:44.265283   77634 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:29:44.280940   77634 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:29:44.318688   77634 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:29:44.318990   77634 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-782377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:29:44.332201   77634 kubeadm.go:309] [bootstrap-token] Using token: o52gh5.f6sjmkidroy1sl61
	I0422 18:29:44.333546   77634 out.go:204]   - Configuring RBAC rules ...
	I0422 18:29:44.333670   77634 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:29:44.342847   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:29:44.350983   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:29:44.354214   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:29:44.361351   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:29:44.365170   77634 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:29:44.654414   77634 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:29:45.170247   77634 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:29:45.654714   77634 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:29:45.654744   77634 kubeadm.go:309] 
	I0422 18:29:45.654847   77634 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:29:45.654871   77634 kubeadm.go:309] 
	I0422 18:29:45.654984   77634 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:29:45.654996   77634 kubeadm.go:309] 
	I0422 18:29:45.655028   77634 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:29:45.655108   77634 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:29:45.655201   77634 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:29:45.655211   77634 kubeadm.go:309] 
	I0422 18:29:45.655308   77634 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:29:45.655317   77634 kubeadm.go:309] 
	I0422 18:29:45.655395   77634 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:29:45.655414   77634 kubeadm.go:309] 
	I0422 18:29:45.655486   77634 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:29:45.655597   77634 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:29:45.655700   77634 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:29:45.655714   77634 kubeadm.go:309] 
	I0422 18:29:45.655824   77634 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:29:45.655951   77634 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:29:45.655963   77634 kubeadm.go:309] 
	I0422 18:29:45.656067   77634 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656226   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:29:45.656258   77634 kubeadm.go:309] 	--control-plane 
	I0422 18:29:45.656265   77634 kubeadm.go:309] 
	I0422 18:29:45.656383   77634 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:29:45.656394   77634 kubeadm.go:309] 
	I0422 18:29:45.656513   77634 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656602   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:29:45.657124   77634 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:29:45.657152   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:29:45.657168   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:29:45.658873   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:29:41.705450   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:44.205661   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:45.660184   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:29:45.671834   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:29:45.693947   77634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:29:45.694034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:45.694054   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-782377 minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=embed-certs-782377 minikube.k8s.io/primary=true
	I0422 18:29:45.901437   77634 ops.go:34] apiserver oom_adj: -16
	I0422 18:29:45.901443   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.402050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.902222   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.402527   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.901535   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.206598   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.703899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.401738   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:48.902497   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.402046   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.901756   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.402023   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.901600   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.401905   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.901739   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.401859   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.902155   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.661872   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.28489375s)
	I0422 18:29:54.661952   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:54.679790   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:54.689947   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:54.700173   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:54.700191   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:54.700230   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:29:54.711462   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:54.711519   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:54.721157   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:29:54.730698   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:54.730769   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:54.740596   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.750450   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:54.750521   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.760582   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:29:54.770551   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:54.770608   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:54.781181   77929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:54.834872   77929 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:54.834950   77929 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:54.982435   77929 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:54.982574   77929 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:54.982675   77929 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:55.208724   77929 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:50.704498   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:53.203270   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.206485   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.210946   77929 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:55.211072   77929 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:55.211180   77929 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:55.211326   77929 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:55.211425   77929 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:55.211546   77929 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:55.211655   77929 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:55.211746   77929 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:55.211831   77929 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:55.211932   77929 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:55.212028   77929 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:55.212076   77929 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:55.212150   77929 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:55.456090   77929 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:55.747103   77929 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:55.940962   77929 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:56.076850   77929 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:56.253326   77929 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:56.253921   77929 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:56.259311   77929 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:53.402196   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:53.902328   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.402353   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.901736   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.401514   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.902415   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.402371   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.902117   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.401817   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.902050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.402034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.574005   77634 kubeadm.go:1107] duration metric: took 12.880033802s to wait for elevateKubeSystemPrivileges
	W0422 18:29:58.574051   77634 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:29:58.574061   77634 kubeadm.go:393] duration metric: took 5m16.036878933s to StartCluster
	I0422 18:29:58.574083   77634 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.574173   77634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:29:58.576621   77634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.576908   77634 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:29:58.578444   77634 out.go:177] * Verifying Kubernetes components...
	I0422 18:29:58.576967   77634 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:29:58.577120   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:29:58.579836   77634 addons.go:69] Setting default-storageclass=true in profile "embed-certs-782377"
	I0422 18:29:58.579846   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:29:58.579850   77634 addons.go:69] Setting metrics-server=true in profile "embed-certs-782377"
	I0422 18:29:58.579873   77634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-782377"
	I0422 18:29:58.579896   77634 addons.go:234] Setting addon metrics-server=true in "embed-certs-782377"
	W0422 18:29:58.579910   77634 addons.go:243] addon metrics-server should already be in state true
	I0422 18:29:58.579952   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.579841   77634 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-782377"
	I0422 18:29:58.580057   77634 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-782377"
	W0422 18:29:58.580070   77634 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:29:58.580099   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.580279   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580284   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580301   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580308   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580460   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580488   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.603276   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0422 18:29:58.603459   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0422 18:29:58.603483   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 18:29:58.607248   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607265   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607392   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607836   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.607853   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.607983   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.608001   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.608344   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608373   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608505   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.608932   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.608963   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612034   77634 addons.go:234] Setting addon default-storageclass=true in "embed-certs-782377"
	W0422 18:29:58.612056   77634 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:29:58.612084   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.612467   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.612485   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612786   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.612802   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.613185   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.613700   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.613728   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.630170   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0422 18:29:58.630586   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.631061   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.631081   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.631523   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.631693   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.631847   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0422 18:29:58.632457   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.632941   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.632966   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.633179   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0422 18:29:58.633322   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.633567   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.633688   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.635830   77634 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:29:58.633856   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.634354   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.636961   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.637004   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:29:58.637027   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:29:58.637045   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.637006   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.637294   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.637508   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.639287   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.640999   77634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:29:58.640236   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:56.261447   77929 out.go:204]   - Booting up control plane ...
	I0422 18:29:56.261539   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:56.261635   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:56.261736   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:56.285519   77929 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:56.285675   77929 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:56.285752   77929 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:56.437635   77929 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:56.437767   77929 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:56.944001   77929 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 506.500244ms
	I0422 18:29:56.944104   77929 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:58.640741   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.642428   77634 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.641034   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.642448   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:29:58.642456   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.642470   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.642590   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.642733   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.642860   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.645684   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646424   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.646469   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646728   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.646929   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.647079   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.647331   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.657385   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 18:29:58.658062   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.658658   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.658676   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.659065   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.659314   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.661001   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.661274   77634 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:58.661292   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:29:58.661309   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.664551   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.665029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665185   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.665397   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.665560   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.665692   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.840086   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:29:58.872963   77634 node_ready.go:35] waiting up to 6m0s for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882942   77634 node_ready.go:49] node "embed-certs-782377" has status "Ready":"True"
	I0422 18:29:58.882978   77634 node_ready.go:38] duration metric: took 9.978929ms for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882990   77634 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:58.892484   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:29:58.964679   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.987690   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:59.001748   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:29:59.001776   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:29:59.095009   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:29:59.095039   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:29:59.242427   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.242451   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:29:59.321464   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.989825   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025095721s)
	I0422 18:29:59.989883   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.989895   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.989828   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.002098611s)
	I0422 18:29:59.989974   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990005   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990193   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990231   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990239   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990247   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990254   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990306   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990341   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990355   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990369   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990380   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990504   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990523   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990572   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990588   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.025628   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.025655   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.025970   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.025991   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.434245   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.434287   77634 pod_ready.go:81] duration metric: took 1.54176792s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.434301   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454521   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.454545   77634 pod_ready.go:81] duration metric: took 20.235494ms for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454557   77634 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.473166   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.151631277s)
	I0422 18:30:00.473225   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473266   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473625   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.473660   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.473683   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.473706   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473719   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473998   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.474079   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.474098   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.474114   77634 addons.go:470] Verifying addon metrics-server=true in "embed-certs-782377"
	I0422 18:30:00.476224   77634 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:29:57.706757   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.206098   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.477945   77634 addons.go:505] duration metric: took 1.900979481s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:30:00.493925   77634 pod_ready.go:92] pod "etcd-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.493956   77634 pod_ready.go:81] duration metric: took 39.391277ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.493971   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502733   77634 pod_ready.go:92] pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.502762   77634 pod_ready.go:81] duration metric: took 8.782315ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502776   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517227   77634 pod_ready.go:92] pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.517249   77634 pod_ready.go:81] duration metric: took 14.465418ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517260   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881221   77634 pod_ready.go:92] pod "kube-proxy-6qsdm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.881245   77634 pod_ready.go:81] duration metric: took 363.979231ms for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881254   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277017   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:01.277103   77634 pod_ready.go:81] duration metric: took 395.840808ms for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277125   77634 pod_ready.go:38] duration metric: took 2.394112246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:01.277153   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:01.277240   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:01.295278   77634 api_server.go:72] duration metric: took 2.718332063s to wait for apiserver process to appear ...
	I0422 18:30:01.295316   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:01.295345   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:30:01.299754   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:30:01.300888   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:01.300912   77634 api_server.go:131] duration metric: took 5.588825ms to wait for apiserver health ...
	I0422 18:30:01.300920   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:01.480184   77634 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:01.480216   77634 system_pods.go:61] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.480220   77634 system_pods.go:61] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.480224   77634 system_pods.go:61] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.480227   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.480231   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.480234   77634 system_pods.go:61] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.480237   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.480243   77634 system_pods.go:61] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.480246   77634 system_pods.go:61] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.480253   77634 system_pods.go:74] duration metric: took 179.327678ms to wait for pod list to return data ...
	I0422 18:30:01.480260   77634 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:01.676749   77634 default_sa.go:45] found service account: "default"
	I0422 18:30:01.676792   77634 default_sa.go:55] duration metric: took 196.525393ms for default service account to be created ...
	I0422 18:30:01.676805   77634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:01.881811   77634 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:01.881846   77634 system_pods.go:89] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.881852   77634 system_pods.go:89] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.881856   77634 system_pods.go:89] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.881861   77634 system_pods.go:89] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.881866   77634 system_pods.go:89] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.881871   77634 system_pods.go:89] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.881875   77634 system_pods.go:89] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.881884   77634 system_pods.go:89] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.881891   77634 system_pods.go:89] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.881902   77634 system_pods.go:126] duration metric: took 205.08856ms to wait for k8s-apps to be running ...
	I0422 18:30:01.881915   77634 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:01.881971   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:01.898653   77634 system_svc.go:56] duration metric: took 16.727076ms WaitForService to wait for kubelet
	I0422 18:30:01.898688   77634 kubeadm.go:576] duration metric: took 3.321747224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:01.898716   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:02.079527   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:02.079552   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:02.079567   77634 node_conditions.go:105] duration metric: took 180.844523ms to run NodePressure ...
	I0422 18:30:02.079581   77634 start.go:240] waiting for startup goroutines ...
	I0422 18:30:02.079590   77634 start.go:245] waiting for cluster config update ...
	I0422 18:30:02.079603   77634 start.go:254] writing updated cluster config ...
	I0422 18:30:02.079881   77634 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:02.131965   77634 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:02.133816   77634 out.go:177] * Done! kubectl is now configured to use "embed-certs-782377" cluster and "default" namespace by default
	I0422 18:30:02.446649   77929 kubeadm.go:309] [api-check] The API server is healthy after 5.502662802s
	I0422 18:30:02.466311   77929 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:02.504029   77929 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:02.586946   77929 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:02.587250   77929 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-856422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:02.600362   77929 kubeadm.go:309] [bootstrap-token] Using token: f03yx2.2vmzf4rav70vm6gm
	I0422 18:30:02.601830   77929 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:02.601961   77929 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:02.608688   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:02.621264   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:02.625695   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:02.630424   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:02.639203   77929 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:02.856167   77929 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:03.309505   77929 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:03.855419   77929 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:03.855443   77929 kubeadm.go:309] 
	I0422 18:30:03.855541   77929 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:03.855567   77929 kubeadm.go:309] 
	I0422 18:30:03.855643   77929 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:03.855653   77929 kubeadm.go:309] 
	I0422 18:30:03.855688   77929 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:03.855756   77929 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:03.855841   77929 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:03.855854   77929 kubeadm.go:309] 
	I0422 18:30:03.855909   77929 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:03.855915   77929 kubeadm.go:309] 
	I0422 18:30:03.855954   77929 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:03.855960   77929 kubeadm.go:309] 
	I0422 18:30:03.856051   77929 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:03.856171   77929 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:03.856248   77929 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:03.856259   77929 kubeadm.go:309] 
	I0422 18:30:03.856390   77929 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:03.856484   77929 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:03.856496   77929 kubeadm.go:309] 
	I0422 18:30:03.856636   77929 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.856729   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:03.856749   77929 kubeadm.go:309] 	--control-plane 
	I0422 18:30:03.856755   77929 kubeadm.go:309] 
	I0422 18:30:03.856823   77929 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:03.856829   77929 kubeadm.go:309] 
	I0422 18:30:03.856911   77929 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.857040   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:03.857540   77929 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:03.857569   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:30:03.857583   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:03.859350   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:03.860736   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:03.873189   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:30:03.897193   77929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:03.897260   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:03.897317   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-856422 minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=default-k8s-diff-port-856422 minikube.k8s.io/primary=true
	I0422 18:30:04.114339   77929 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:04.114499   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:02.703452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.705502   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.615355   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.115530   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.614776   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.114991   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.614772   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.114921   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.614799   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.115218   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.614688   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:09.114578   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.203762   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.704636   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.615201   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.115526   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.614511   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.115041   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.615220   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.115463   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.614937   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.115470   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.615417   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:14.114916   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.158118   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:30:13.158841   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:13.159056   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:11.706452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.203931   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.614582   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.115466   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.615542   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.115554   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.614586   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.114645   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.614945   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.769793   77929 kubeadm.go:1107] duration metric: took 13.872592974s to wait for elevateKubeSystemPrivileges
	W0422 18:30:17.769857   77929 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:30:17.769869   77929 kubeadm.go:393] duration metric: took 5m15.279261637s to StartCluster
	I0422 18:30:17.769889   77929 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.769958   77929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:30:17.771921   77929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.772222   77929 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:30:17.774219   77929 out.go:177] * Verifying Kubernetes components...
	I0422 18:30:17.772365   77929 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:30:17.772496   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:30:17.776231   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:30:17.776249   77929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776267   77929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776294   77929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776307   77929 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:30:17.776321   77929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-856422"
	I0422 18:30:17.776343   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776284   77929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776412   77929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776430   77929 addons.go:243] addon metrics-server should already be in state true
	I0422 18:30:17.776469   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776775   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776809   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776778   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776846   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776777   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776926   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.796665   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0422 18:30:17.796701   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0422 18:30:17.796976   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0422 18:30:17.797083   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797472   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797609   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797795   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.797824   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798111   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798141   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798158   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798499   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798627   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798648   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798728   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.798776   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799001   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.799077   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.799107   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799274   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.803095   77929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.803141   77929 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:30:17.803175   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.803544   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.803580   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.820753   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0422 18:30:17.821272   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.821822   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.821839   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.822247   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.822315   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0422 18:30:17.822640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.823287   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0422 18:30:17.823373   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.823976   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.824141   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824152   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824479   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824498   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824561   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.824727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.825176   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.825646   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.825675   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.826014   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.828122   77929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:30:17.826808   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.829694   77929 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:17.829711   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:30:17.829729   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.831322   77929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:30:17.834942   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:30:17.834959   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:30:17.834979   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.833531   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.832894   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835054   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.835078   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.835468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.835674   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.837838   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838180   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.838204   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838459   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.838656   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.838827   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.838983   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.844804   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0422 18:30:17.845252   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.845762   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.845780   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.846071   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.846240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.847881   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.848127   77929 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:17.848142   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:30:17.848159   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.850959   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851369   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.851389   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.851786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.851918   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.852081   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.997608   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:30:18.066476   77929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.139937   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:18.141619   77929 node_ready.go:49] node "default-k8s-diff-port-856422" has status "Ready":"True"
	I0422 18:30:18.141645   77929 node_ready.go:38] duration metric: took 75.13675ms for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.141658   77929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:18.168289   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:18.217351   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:30:18.217374   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:30:18.280089   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:18.283704   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:30:18.283734   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:30:18.314907   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.314936   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:30:18.379950   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.595931   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.595969   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596350   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596374   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.596389   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596660   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596699   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596722   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610244   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.610277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.610614   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.610635   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610659   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.159553   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:18.159883   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:19.513892   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233747961s)
	I0422 18:30:19.513948   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.513961   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514326   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.514460   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.514491   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.514506   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514414   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517601   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.517617   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.805551   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425552646s)
	I0422 18:30:19.805610   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.805621   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.805986   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.806040   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.806064   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.806083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.807818   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.807865   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.807874   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.807889   77929 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-856422"
	I0422 18:30:19.809871   77929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0422 18:30:15.697614   77400 pod_ready.go:81] duration metric: took 4m0.000479463s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	E0422 18:30:15.697661   77400 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:30:15.697678   77400 pod_ready.go:38] duration metric: took 4m9.017394523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:15.697704   77400 kubeadm.go:591] duration metric: took 4m15.772560858s to restartPrimaryControlPlane
	W0422 18:30:15.697751   77400 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:30:15.697777   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:30:19.811644   77929 addons.go:505] duration metric: took 2.039289124s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0422 18:30:20.174912   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:20.675213   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.675247   77929 pod_ready.go:81] duration metric: took 2.506921343s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.675261   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681665   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.681690   77929 pod_ready.go:81] duration metric: took 6.421217ms for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681700   77929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687893   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.687926   77929 pod_ready.go:81] duration metric: took 6.218166ms for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687941   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696603   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.696634   77929 pod_ready.go:81] duration metric: took 8.684682ms for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696649   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702776   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.702800   77929 pod_ready.go:81] duration metric: took 6.141484ms for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702813   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073451   77929 pod_ready.go:92] pod "kube-proxy-4m8cm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.073485   77929 pod_ready.go:81] duration metric: took 370.663669ms for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073500   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474144   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.474175   77929 pod_ready.go:81] duration metric: took 400.665802ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474190   77929 pod_ready.go:38] duration metric: took 3.332515716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:21.474207   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:21.474273   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:21.491320   77929 api_server.go:72] duration metric: took 3.719060391s to wait for apiserver process to appear ...
	I0422 18:30:21.491352   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:21.491378   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:30:21.496589   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:30:21.497405   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:21.497426   77929 api_server.go:131] duration metric: took 6.067469ms to wait for apiserver health ...
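	(Editor's note, not part of the captured log.) The lines above show minikube probing the apiserver's /healthz endpoint on port 8444 and only then moving on to the kube-system pod checks. A minimal, hypothetical sketch of that kind of readiness probe in Go follows; it is not the actual api_server.go implementation, and the endpoint URL and the InsecureSkipVerify setting are assumptions made for illustration only.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skipping TLS verification here only for the sketch; minikube uses the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered 200, as in the log above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.206:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}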
	I0422 18:30:21.497433   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:21.675885   77929 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:21.675912   77929 system_pods.go:61] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:21.675916   77929 system_pods.go:61] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:21.675924   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:21.675928   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:21.675932   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:21.675935   77929 system_pods.go:61] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:21.675939   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:21.675945   77929 system_pods.go:61] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:21.675949   77929 system_pods.go:61] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:21.675959   77929 system_pods.go:74] duration metric: took 178.519985ms to wait for pod list to return data ...
	I0422 18:30:21.675965   77929 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:21.872358   77929 default_sa.go:45] found service account: "default"
	I0422 18:30:21.872382   77929 default_sa.go:55] duration metric: took 196.412252ms for default service account to be created ...
	I0422 18:30:21.872391   77929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:22.075660   77929 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:22.075689   77929 system_pods.go:89] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:22.075694   77929 system_pods.go:89] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:22.075698   77929 system_pods.go:89] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:22.075702   77929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:22.075706   77929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:22.075710   77929 system_pods.go:89] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:22.075714   77929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:22.075722   77929 system_pods.go:89] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:22.075726   77929 system_pods.go:89] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:22.075735   77929 system_pods.go:126] duration metric: took 203.339608ms to wait for k8s-apps to be running ...
	I0422 18:30:22.075742   77929 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:22.075785   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:22.091186   77929 system_svc.go:56] duration metric: took 15.433207ms WaitForService to wait for kubelet
	I0422 18:30:22.091219   77929 kubeadm.go:576] duration metric: took 4.318966383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:22.091237   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:22.272944   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:22.272971   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:22.272980   77929 node_conditions.go:105] duration metric: took 181.734735ms to run NodePressure ...
	I0422 18:30:22.272991   77929 start.go:240] waiting for startup goroutines ...
	I0422 18:30:22.273000   77929 start.go:245] waiting for cluster config update ...
	I0422 18:30:22.273010   77929 start.go:254] writing updated cluster config ...
	I0422 18:30:22.273248   77929 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:22.323725   77929 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:22.325876   77929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-856422" cluster and "default" namespace by default
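	(Editor's note, not part of the captured log.) At this point the default-k8s-diff-port-856422 cluster is up and the kubeconfig has been written. A hypothetical follow-up check, not something the test run performs, could list the kube-system pods through client-go; the kubeconfig path below is taken from the paths logged elsewhere in this report and is assumed to be readable by the caller.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location for this sketch (matches the CI job's path).
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18706-11572/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// List kube-system pods, the same namespace the waits in this log inspect.
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
		}
	}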
	I0422 18:30:28.159925   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:28.160147   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.161034   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:48.161430   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.109960   77400 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.41215685s)
	I0422 18:30:48.110037   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:48.127246   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:30:48.138280   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:30:48.148521   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:30:48.148545   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:30:48.148588   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:30:48.160411   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:30:48.160483   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:30:48.170748   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:30:48.180399   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:30:48.180451   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:30:48.192521   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.202200   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:30:48.202274   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.212241   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:30:48.221754   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:30:48.221821   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:30:48.231555   77400 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:30:48.456873   77400 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:57.943980   77400 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:30:57.944080   77400 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:30:57.944182   77400 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:30:57.944305   77400 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:30:57.944411   77400 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:30:57.944499   77400 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:30:57.946110   77400 out.go:204]   - Generating certificates and keys ...
	I0422 18:30:57.946192   77400 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:30:57.946262   77400 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:30:57.946385   77400 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:30:57.946464   77400 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:30:57.946559   77400 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:30:57.946683   77400 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:30:57.946772   77400 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:30:57.946835   77400 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:30:57.946902   77400 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:30:57.946963   77400 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:30:57.947000   77400 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:30:57.947054   77400 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:30:57.947116   77400 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:30:57.947201   77400 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:30:57.947283   77400 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:30:57.947383   77400 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:30:57.947458   77400 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:30:57.947589   77400 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:30:57.947662   77400 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:30:57.949092   77400 out.go:204]   - Booting up control plane ...
	I0422 18:30:57.949194   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:30:57.949279   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:30:57.949336   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:30:57.949419   77400 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:30:57.949505   77400 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:30:57.949544   77400 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:30:57.949664   77400 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:30:57.949739   77400 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:30:57.949794   77400 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.588061ms
	I0422 18:30:57.949862   77400 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:30:57.949957   77400 kubeadm.go:309] [api-check] The API server is healthy after 5.510546703s
	I0422 18:30:57.950048   77400 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:57.950152   77400 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:57.950204   77400 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:57.950352   77400 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-407991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:57.950453   77400 kubeadm.go:309] [bootstrap-token] Using token: cwotot.4qmmrydp0nd6w5tq
	I0422 18:30:57.951938   77400 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:57.952040   77400 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:57.952134   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:57.952285   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:57.952410   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:57.952535   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:57.952666   77400 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:57.952799   77400 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:57.952867   77400 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:57.952936   77400 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:57.952952   77400 kubeadm.go:309] 
	I0422 18:30:57.953013   77400 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:57.953019   77400 kubeadm.go:309] 
	I0422 18:30:57.953084   77400 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:57.953090   77400 kubeadm.go:309] 
	I0422 18:30:57.953110   77400 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:57.953199   77400 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:57.953281   77400 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:57.953289   77400 kubeadm.go:309] 
	I0422 18:30:57.953374   77400 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:57.953381   77400 kubeadm.go:309] 
	I0422 18:30:57.953453   77400 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:57.953461   77400 kubeadm.go:309] 
	I0422 18:30:57.953538   77400 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:57.953636   77400 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:57.953719   77400 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:57.953726   77400 kubeadm.go:309] 
	I0422 18:30:57.953813   77400 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:57.953919   77400 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:57.953930   77400 kubeadm.go:309] 
	I0422 18:30:57.954047   77400 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954187   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:57.954222   77400 kubeadm.go:309] 	--control-plane 
	I0422 18:30:57.954232   77400 kubeadm.go:309] 
	I0422 18:30:57.954364   77400 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:57.954374   77400 kubeadm.go:309] 
	I0422 18:30:57.954440   77400 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954553   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:57.954574   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:30:57.954583   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:57.956278   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:57.957592   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:57.970080   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:30:57.991711   77400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:57.991779   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:57.991780   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-407991 minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=no-preload-407991 minikube.k8s.io/primary=true
	I0422 18:30:58.232025   77400 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:58.232162   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:58.732395   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.232855   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.732187   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.232654   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.732995   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.232856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.732735   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.232474   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.732930   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.232411   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.732457   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.232888   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.732856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.232873   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.733177   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.232682   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.733241   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.232711   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.732922   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.232815   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.732377   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.232576   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.732243   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.232350   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.732764   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.232338   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.357414   77400 kubeadm.go:1107] duration metric: took 13.365692776s to wait for elevateKubeSystemPrivileges
	W0422 18:31:11.357460   77400 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:31:11.357472   77400 kubeadm.go:393] duration metric: took 5m11.48385131s to StartCluster
	I0422 18:31:11.357493   77400 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.357565   77400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:31:11.359176   77400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.359391   77400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:31:11.360948   77400 out.go:177] * Verifying Kubernetes components...
	I0422 18:31:11.359461   77400 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:31:11.359641   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:31:11.362433   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:31:11.362446   77400 addons.go:69] Setting storage-provisioner=true in profile "no-preload-407991"
	I0422 18:31:11.362464   77400 addons.go:69] Setting default-storageclass=true in profile "no-preload-407991"
	I0422 18:31:11.362486   77400 addons.go:69] Setting metrics-server=true in profile "no-preload-407991"
	I0422 18:31:11.362495   77400 addons.go:234] Setting addon storage-provisioner=true in "no-preload-407991"
	I0422 18:31:11.362500   77400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-407991"
	I0422 18:31:11.362515   77400 addons.go:234] Setting addon metrics-server=true in "no-preload-407991"
	W0422 18:31:11.362527   77400 addons.go:243] addon metrics-server should already be in state true
	W0422 18:31:11.362506   77400 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:31:11.362557   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362567   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362929   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362932   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362963   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362971   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362974   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.363144   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.379089   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0422 18:31:11.379582   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.380121   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.380145   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.380496   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.381098   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.381132   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.383229   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 18:31:11.383513   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0422 18:31:11.383642   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.383977   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.384136   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384148   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384552   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.384754   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384770   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384801   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.385103   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.386102   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.386130   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.388554   77400 addons.go:234] Setting addon default-storageclass=true in "no-preload-407991"
	W0422 18:31:11.388569   77400 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:31:11.388589   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.388921   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.388938   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.401669   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0422 18:31:11.402268   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.402852   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.402869   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.403427   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.403610   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.404849   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0422 18:31:11.405356   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.405588   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.406112   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.406129   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.407696   77400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:31:11.406649   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.409174   77400 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.409195   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:31:11.409214   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.409261   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.411378   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.412836   77400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:31:11.411939   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0422 18:31:11.414011   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:31:11.414027   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:31:11.413155   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.414045   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.414069   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.413487   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.414097   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.413841   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.414686   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.414781   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.414794   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.414871   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.415256   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.415607   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.416288   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.416343   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.417257   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417623   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.417644   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417898   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.418074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.418325   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.418468   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.432218   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0422 18:31:11.432682   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.433096   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.433108   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.433685   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.433887   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.435675   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.435931   77400 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.435952   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:31:11.435969   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.438700   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439107   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.439144   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439237   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.439482   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.439662   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.439833   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.610190   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:31:11.654061   77400 node_ready.go:35] waiting up to 6m0s for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663869   77400 node_ready.go:49] node "no-preload-407991" has status "Ready":"True"
	I0422 18:31:11.663904   77400 node_ready.go:38] duration metric: took 9.806821ms for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663917   77400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:11.673895   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:11.752785   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.770023   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:31:11.770054   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:31:11.799895   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.872083   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:31:11.872113   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:31:11.984597   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:11.984626   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:31:12.059137   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:13.130584   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330646778s)
	I0422 18:31:13.130694   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130718   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.130716   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37789401s)
	I0422 18:31:13.130833   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130847   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131067   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131135   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131159   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131172   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131289   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131304   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131312   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131319   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131327   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.131559   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131574   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131601   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131621   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131621   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.173181   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.173205   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.173478   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.173501   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.279764   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.220585481s)
	I0422 18:31:13.279813   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.279828   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280221   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280241   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280261   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280276   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.280290   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280532   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280570   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280577   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280586   77400 addons.go:470] Verifying addon metrics-server=true in "no-preload-407991"
	I0422 18:31:13.282757   77400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:31:13.284029   77400 addons.go:505] duration metric: took 1.924572004s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:31:13.681968   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.682004   77400 pod_ready.go:81] duration metric: took 2.008061657s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.682017   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687240   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.687268   77400 pod_ready.go:81] duration metric: took 5.242949ms for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687281   77400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693047   77400 pod_ready.go:92] pod "etcd-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.693074   77400 pod_ready.go:81] duration metric: took 5.784769ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693086   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705008   77400 pod_ready.go:92] pod "kube-apiserver-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.705028   77400 pod_ready.go:81] duration metric: took 11.934672ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705037   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721814   77400 pod_ready.go:92] pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.721840   77400 pod_ready.go:81] duration metric: took 16.796546ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721855   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079660   77400 pod_ready.go:92] pod "kube-proxy-47g8k" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.079681   77400 pod_ready.go:81] duration metric: took 357.819791ms for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079692   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480000   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.480026   77400 pod_ready.go:81] duration metric: took 400.326493ms for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480037   77400 pod_ready.go:38] duration metric: took 2.816106046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:14.480054   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:31:14.480123   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:31:14.508798   77400 api_server.go:72] duration metric: took 3.149365253s to wait for apiserver process to appear ...
	I0422 18:31:14.508822   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:31:14.508842   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:31:14.523293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:31:14.524410   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:31:14.524439   77400 api_server.go:131] duration metric: took 15.608906ms to wait for apiserver health ...
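For reference, the healthz probe above can be reproduced by hand against the same endpoint. This is a minimal sketch, not part of the test run: it assumes anonymous access to /healthz is still enabled (the Kubernetes default) and uses curl -k only because the apiserver serves a cluster-internal certificate; the expected response body is "ok".

	curl -k https://192.168.39.164:8443/healthz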
	I0422 18:31:14.524448   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:31:14.682120   77400 system_pods.go:59] 9 kube-system pods found
	I0422 18:31:14.682152   77400 system_pods.go:61] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:14.682157   77400 system_pods.go:61] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:14.682161   77400 system_pods.go:61] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:14.682164   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:14.682169   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:14.682173   77400 system_pods.go:61] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:14.682178   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:14.682188   77400 system_pods.go:61] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:14.682194   77400 system_pods.go:61] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:14.682205   77400 system_pods.go:74] duration metric: took 157.750249ms to wait for pod list to return data ...
	I0422 18:31:14.682222   77400 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:31:14.878556   77400 default_sa.go:45] found service account: "default"
	I0422 18:31:14.878581   77400 default_sa.go:55] duration metric: took 196.353021ms for default service account to be created ...
	I0422 18:31:14.878590   77400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:31:15.081385   77400 system_pods.go:86] 9 kube-system pods found
	I0422 18:31:15.081415   77400 system_pods.go:89] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:15.081425   77400 system_pods.go:89] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:15.081430   77400 system_pods.go:89] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:15.081434   77400 system_pods.go:89] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:15.081438   77400 system_pods.go:89] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:15.081448   77400 system_pods.go:89] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:15.081452   77400 system_pods.go:89] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:15.081458   77400 system_pods.go:89] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:15.081464   77400 system_pods.go:89] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:15.081476   77400 system_pods.go:126] duration metric: took 202.881032ms to wait for k8s-apps to be running ...
	I0422 18:31:15.081484   77400 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:31:15.081530   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:15.098245   77400 system_svc.go:56] duration metric: took 16.748933ms WaitForService to wait for kubelet
	I0422 18:31:15.098278   77400 kubeadm.go:576] duration metric: took 3.738847086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:31:15.098302   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:31:15.278812   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:31:15.278839   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:31:15.278848   77400 node_conditions.go:105] duration metric: took 180.541553ms to run NodePressure ...
	I0422 18:31:15.278859   77400 start.go:240] waiting for startup goroutines ...
	I0422 18:31:15.278866   77400 start.go:245] waiting for cluster config update ...
	I0422 18:31:15.278875   77400 start.go:254] writing updated cluster config ...
	I0422 18:31:15.279242   77400 ssh_runner.go:195] Run: rm -f paused
	I0422 18:31:15.330788   77400 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:31:15.333274   77400 out.go:177] * Done! kubectl is now configured to use "no-preload-407991" cluster and "default" namespace by default
	I0422 18:31:28.163100   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:31:28.163394   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163417   78377 kubeadm.go:309] 
	I0422 18:31:28.163487   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:31:28.163724   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:31:28.163734   78377 kubeadm.go:309] 
	I0422 18:31:28.163791   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:31:28.163857   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:31:28.164010   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:31:28.164024   78377 kubeadm.go:309] 
	I0422 18:31:28.164159   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:31:28.164207   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:31:28.164251   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:31:28.164265   78377 kubeadm.go:309] 
	I0422 18:31:28.164413   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:31:28.164579   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:31:28.164607   78377 kubeadm.go:309] 
	I0422 18:31:28.164767   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:31:28.164919   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:31:28.165050   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:31:28.165153   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:31:28.165169   78377 kubeadm.go:309] 
	I0422 18:31:28.166948   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:31:28.167081   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:31:28.167206   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:31:28.167328   78377 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0422 18:31:28.167404   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:31:28.857637   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:28.875137   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:31:28.887680   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:31:28.887713   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:31:28.887768   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:31:28.900305   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:31:28.900364   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:31:28.912825   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:31:28.927080   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:31:28.927184   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:31:28.939052   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.949650   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:31:28.949726   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.960782   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:31:28.972073   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:31:28.972131   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
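The four grep/rm pairs above implement a single cleanup rule: a kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is treated as stale and removed before kubeadm init is retried. A minimal sketch of the same loop, assuming only the file names and endpoint shown in this run:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected endpoint
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done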
	I0422 18:31:28.983161   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:31:29.220135   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:33:25.762018   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:33:25.762162   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:33:25.763935   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:33:25.763996   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:33:25.764109   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:33:25.764234   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:33:25.764384   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:33:25.764478   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:33:25.766215   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:33:25.766332   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:33:25.766425   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:33:25.766525   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:33:25.766612   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:33:25.766680   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:33:25.766725   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:33:25.766778   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:33:25.766829   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:33:25.766907   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:33:25.766999   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:33:25.767062   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:33:25.767150   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:33:25.767210   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:33:25.767277   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:33:25.767378   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:33:25.767465   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:33:25.767602   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:33:25.767714   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:33:25.767848   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:33:25.767944   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:33:25.769378   78377 out.go:204]   - Booting up control plane ...
	I0422 18:33:25.769497   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:33:25.769600   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:33:25.769691   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:33:25.769819   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:33:25.769987   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:33:25.770059   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:33:25.770164   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770451   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770538   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770748   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770827   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771002   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771066   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771264   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771397   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771583   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771594   78377 kubeadm.go:309] 
	I0422 18:33:25.771655   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:33:25.771711   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:33:25.771726   78377 kubeadm.go:309] 
	I0422 18:33:25.771779   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:33:25.771836   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:33:25.771973   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:33:25.771981   78377 kubeadm.go:309] 
	I0422 18:33:25.772091   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:33:25.772132   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:33:25.772175   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:33:25.772182   78377 kubeadm.go:309] 
	I0422 18:33:25.772286   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:33:25.772374   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:33:25.772381   78377 kubeadm.go:309] 
	I0422 18:33:25.772491   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:33:25.772570   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:33:25.772641   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:33:25.772702   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:33:25.772741   78377 kubeadm.go:309] 
	I0422 18:33:25.772767   78377 kubeadm.go:393] duration metric: took 7m59.977108208s to StartCluster
	I0422 18:33:25.772800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:33:25.772854   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:33:25.824904   78377 cri.go:89] found id: ""
	I0422 18:33:25.824928   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.824946   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:33:25.824957   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:33:25.825011   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:33:25.864537   78377 cri.go:89] found id: ""
	I0422 18:33:25.864563   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.864570   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:33:25.864575   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:33:25.864630   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:33:25.906760   78377 cri.go:89] found id: ""
	I0422 18:33:25.906784   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.906793   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:33:25.906800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:33:25.906868   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:33:25.945325   78377 cri.go:89] found id: ""
	I0422 18:33:25.945347   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.945354   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:33:25.945360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:33:25.945407   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:33:25.984005   78377 cri.go:89] found id: ""
	I0422 18:33:25.984035   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.984052   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:33:25.984059   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:33:25.984121   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:33:26.023499   78377 cri.go:89] found id: ""
	I0422 18:33:26.023525   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.023535   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:33:26.023549   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:33:26.023611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:33:26.064439   78377 cri.go:89] found id: ""
	I0422 18:33:26.064468   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.064479   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:33:26.064487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:33:26.064552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:33:26.104231   78377 cri.go:89] found id: ""
	I0422 18:33:26.104254   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.104262   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:33:26.104270   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:33:26.104282   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:33:26.213826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:33:26.213871   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:33:26.278837   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:33:26.278866   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:33:26.337634   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:33:26.337677   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:33:26.351578   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:33:26.351605   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:33:26.445108   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0422 18:33:26.445139   78377 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:33:26.445177   78377 out.go:239] * 
	W0422 18:33:26.445248   78377 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.445279   78377 out.go:239] * 
	W0422 18:33:26.446406   78377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:33:26.450209   78377 out.go:177] 
	W0422 18:33:26.451494   78377 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.451552   78377 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:33:26.451576   78377 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:33:26.453333   78377 out.go:177] 
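Applied to this run, the suggestion above amounts to inspecting the kubelet first and then restarting the profile with the kubelet cgroup driver pinned to systemd. This is a hedged sketch: the profile name is a placeholder (it is not shown here), and only commands and flags already quoted in the log are used.

	# inspect why the kubelet never became healthy
	journalctl -xeu kubelet
	# retry the start with the suggested kubelet override
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd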
	
	
	==> CRI-O <==
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.251703687Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9d27ffd760dd1334628f476ab690f841decc9a317b03cb1a5d2c8337bbcbba9c,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-lv49p,Uid:e99119a1-18ac-4ce8-ab9d-5cbbeddc243b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810600521158611,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-lv49p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e99119a1-18ac-4ce8-ab9d-5cbbeddc243b,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:30:00.213383509Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4f515603-72e0-4408-9180-1010cf97877d,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810600314337447,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-22T18:29:59.988392019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&PodSandboxMetadata{Name:kube-proxy-6qsdm,Uid:a79875f5-4fdf-4a0e-9bfc-985fda10a906,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810599262913375,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:29:58.344610989Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-44bfz,Ui
d:70b8e7df-e60e-441c-8249-5eebb9a4409c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810598798327067,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b8e7df-e60e-441c-8249-5eebb9a4409c,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:29:58.461765608Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-425zd,Uid:70c9e268-0ecd-4d68-aac9-b979888bfd95,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810598753151860,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,k8s-app: kube-dns,pod-templa
te-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:29:58.443493952Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-782377,Uid:f2fafddd9940494ad294a48e8603a8e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810579016043963,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f2fafddd9940494ad294a48e8603a8e3,kubernetes.io/config.seen: 2024-04-22T18:29:38.566365777Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&PodSandboxMetadata{Name:kube-apiserver
-embed-certs-782377,Uid:73eef8b6c0004e5c37db86236681b5e2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810579010460963,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.114:8443,kubernetes.io/config.hash: 73eef8b6c0004e5c37db86236681b5e2,kubernetes.io/config.seen: 2024-04-22T18:29:38.566362731Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-782377,Uid:2bdccc9980979127d4755cbda0fbecd7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810579009458443,Labels:map[string]string{component: kube-controller-mana
ger,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bdccc9980979127d4755cbda0fbecd7,kubernetes.io/config.seen: 2024-04-22T18:29:38.566364072Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-782377,Uid:01f859357e4afdb12fb42a95a16952b1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810579002414826,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.114:2379,kubernetes.io/config.hash: 01f859357e4afdb12fb42a95a16952b1,kubernetes.io/config.seen: 2024-04-22T18:29:38.566357051Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f16c4da4-79b3-4726-b518-1343ff525d73 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.252496662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71c1ceac-f007-4553-9bf0-25fe366fe108 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.252553856Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71c1ceac-f007-4553-9bf0-25fe366fe108 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.252775275Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71c1ceac-f007-4553-9bf0-25fe366fe108 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.264327766Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a98e91e5-0e61-4ebb-863e-55c37c270d80 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.264400448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a98e91e5-0e61-4ebb-863e-55c37c270d80 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.266083687Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=127ad4e2-0ec5-45c4-b1e5-94e9b88bb159 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.266545481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811144266522906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=127ad4e2-0ec5-45c4-b1e5-94e9b88bb159 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.268020028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6cce99e8-8fb3-4063-af84-5dd8ab9234f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.268104264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6cce99e8-8fb3-4063-af84-5dd8ab9234f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.268467509Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6cce99e8-8fb3-4063-af84-5dd8ab9234f3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.307743547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e8c4cba-b69e-490b-89b5-f0ad5a4b8540 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.307842779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e8c4cba-b69e-490b-89b5-f0ad5a4b8540 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.309439616Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3696c10a-275c-4d35-8f9c-47ca8cbb1f0f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.310058318Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811144309810076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3696c10a-275c-4d35-8f9c-47ca8cbb1f0f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.310667046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0947040f-4ff0-4a8c-aeef-f282dcc905c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.310768627Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0947040f-4ff0-4a8c-aeef-f282dcc905c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.311829754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0947040f-4ff0-4a8c-aeef-f282dcc905c9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.353033285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15966aee-8efb-445c-b33b-8b2f2a004eb6 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.353109125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15966aee-8efb-445c-b33b-8b2f2a004eb6 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.354366991Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95a46ae4-a25d-449e-be27-af5ac0990432 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.354740095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811144354716519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95a46ae4-a25d-449e-be27-af5ac0990432 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.355286110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86cf980b-5fcf-48b5-96df-fcf870702be2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.355335947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86cf980b-5fcf-48b5-96df-fcf870702be2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:04 embed-certs-782377 crio[724]: time="2024-04-22 18:39:04.355682797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86cf980b-5fcf-48b5-96df-fcf870702be2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c0185f4d38b02       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2100946ee89aa       storage-provisioner
	a4d31d6c730b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6ea02c2d9d964       coredns-7db6d8ff4d-425zd
	b866a8972ff20       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   6182a53be2d9c       coredns-7db6d8ff4d-44bfz
	0db3e27ffbb20       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   1da8deaadb5a1       kube-proxy-6qsdm
	c5908d6f96605       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   0510ebb35da7b       kube-controller-manager-embed-certs-782377
	081ed6bcbd5ca       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   0f8b5a120c61e       kube-scheduler-embed-certs-782377
	b22085c535e9c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   1e88cb01978f5       etcd-embed-certs-782377
	01c3e02d8cb9a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   c3dbb79a78f3e       kube-apiserver-embed-certs-782377
	
	
	==> coredns [a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-782377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-782377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=embed-certs-782377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:29:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-782377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:38:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:35:12 +0000   Mon, 22 Apr 2024 18:29:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:35:12 +0000   Mon, 22 Apr 2024 18:29:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:35:12 +0000   Mon, 22 Apr 2024 18:29:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:35:12 +0000   Mon, 22 Apr 2024 18:29:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    embed-certs-782377
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e9919cbcdef4481b79ec61d03881f1d
	  System UUID:                6e9919cb-cdef-4481-b79e-c61d03881f1d
	  Boot ID:                    377d73fc-c18b-4f21-a34d-ee8dade6c327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-425zd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-44bfz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-embed-certs-782377                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-782377             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-782377    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-6qsdm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-embed-certs-782377             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-lv49p               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-782377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-782377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-782377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node embed-certs-782377 event: Registered Node embed-certs-782377 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052644] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040412] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.572143] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.740035] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.409896] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.879707] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.116345] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.181736] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.149072] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.308485] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.597861] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.064120] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.204501] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +4.617472] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.448903] kauditd_printk_skb: 79 callbacks suppressed
	[Apr22 18:29] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.629663] systemd-fstab-generator[3576]: Ignoring "noauto" option for root device
	[  +4.468145] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.077990] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[ +13.994816] systemd-fstab-generator[4098]: Ignoring "noauto" option for root device
	[  +0.080595] kauditd_printk_skb: 14 callbacks suppressed
	[Apr22 18:30] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781] <==
	{"level":"info","ts":"2024-04-22T18:29:39.763628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 switched to configuration voters=(17357627813233571301)"}
	{"level":"info","ts":"2024-04-22T18:29:39.764167Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"659e1302ad88139d","local-member-id":"f0e2ae880f3a35e5","added-peer-id":"f0e2ae880f3a35e5","added-peer-peer-urls":["https://192.168.50.114:2380"]}
	{"level":"info","ts":"2024-04-22T18:29:39.764673Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T18:29:39.768285Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f0e2ae880f3a35e5","initial-advertise-peer-urls":["https://192.168.50.114:2380"],"listen-peer-urls":["https://192.168.50.114:2380"],"advertise-client-urls":["https://192.168.50.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:29:39.770086Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:29:39.764717Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2024-04-22T18:29:39.770566Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2024-04-22T18:29:40.613261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:40.613322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:40.613364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgPreVoteResp from f0e2ae880f3a35e5 at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:40.613378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.613384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgVoteResp from f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.613392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became leader at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.61341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0e2ae880f3a35e5 elected leader f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.615207Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.616507Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f0e2ae880f3a35e5","local-member-attributes":"{Name:embed-certs-782377 ClientURLs:[https://192.168.50.114:2379]}","request-path":"/0/members/f0e2ae880f3a35e5/attributes","cluster-id":"659e1302ad88139d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:29:40.616736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:40.61718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:40.617436Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"659e1302ad88139d","local-member-id":"f0e2ae880f3a35e5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.617524Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.617566Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.617993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:40.618027Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:40.619323Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:29:40.629744Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.114:2379"}
	
	
	==> kernel <==
	 18:39:04 up 14 min,  0 users,  load average: 0.10, 0.22, 0.14
	Linux embed-certs-782377 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57] <==
	I0422 18:33:01.086612       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:34:42.154199       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:34:42.154519       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0422 18:34:43.155310       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:34:43.155430       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:34:43.155439       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:34:43.155545       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:34:43.155652       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:34:43.156801       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:35:43.155627       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:35:43.155730       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:35:43.155743       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:35:43.157013       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:35:43.157066       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:35:43.157076       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:37:43.157223       1 handler_proxy.go:93] no RequestInfo found in the context
	W0422 18:37:43.157221       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:37:43.157681       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:37:43.157703       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0422 18:37:43.157787       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:37:43.159234       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70] <==
	I0422 18:33:28.544997       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:33:58.109798       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:33:58.554080       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:34:28.115514       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:34:28.562777       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:34:58.121098       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:34:58.571767       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:35:28.126502       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:35:28.579609       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:35:58.132329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:35:58.589614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:36:09.023417       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="276.264µs"
	I0422 18:36:22.016080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="190.753µs"
	E0422 18:36:28.137805       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:36:28.599176       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:36:58.144552       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:36:58.611620       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:37:28.149904       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:37:28.619792       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:37:58.156727       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:37:58.628386       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:38:28.162329       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:38:28.636227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:38:58.169031       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:38:58.644834       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732] <==
	I0422 18:30:00.301732       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:30:00.371486       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.114"]
	I0422 18:30:00.536057       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:30:00.536100       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:30:00.536117       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:30:00.540398       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:30:00.540667       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:30:00.540710       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:30:00.542303       1 config.go:192] "Starting service config controller"
	I0422 18:30:00.542318       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:30:00.542345       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:30:00.542349       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:30:00.542879       1 config.go:319] "Starting node config controller"
	I0422 18:30:00.542889       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:30:00.642956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 18:30:00.643064       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:30:00.643310       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9] <==
	W0422 18:29:42.223553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:29:42.223592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 18:29:43.064481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 18:29:43.064535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 18:29:43.084215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 18:29:43.084273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 18:29:43.135334       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:29:43.135446       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:29:43.231247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:29:43.231364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 18:29:43.270993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 18:29:43.271058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 18:29:43.334255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 18:29:43.334309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 18:29:43.334362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:29:43.334372       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 18:29:43.372723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:29:43.372814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 18:29:43.372862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:29:43.372870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 18:29:43.391603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 18:29:43.391662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 18:29:43.418254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 18:29:43.418306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0422 18:29:45.303530       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:36:45 embed-certs-782377 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:36:45 embed-certs-782377 kubelet[3905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:36:45 embed-certs-782377 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:36:45 embed-certs-782377 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:36:47 embed-certs-782377 kubelet[3905]: E0422 18:36:47.998253    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:36:59 embed-certs-782377 kubelet[3905]: E0422 18:36:59.997866    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:37:14 embed-certs-782377 kubelet[3905]: E0422 18:37:14.999287    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:37:29 embed-certs-782377 kubelet[3905]: E0422 18:37:29.997782    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:37:43 embed-certs-782377 kubelet[3905]: E0422 18:37:43.997622    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:37:45 embed-certs-782377 kubelet[3905]: E0422 18:37:45.028551    3905 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:37:45 embed-certs-782377 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:37:45 embed-certs-782377 kubelet[3905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:37:45 embed-certs-782377 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:37:45 embed-certs-782377 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:37:57 embed-certs-782377 kubelet[3905]: E0422 18:37:57.998144    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:38:08 embed-certs-782377 kubelet[3905]: E0422 18:38:08.997882    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:38:22 embed-certs-782377 kubelet[3905]: E0422 18:38:22.999048    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:38:36 embed-certs-782377 kubelet[3905]: E0422 18:38:36.998361    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:38:45 embed-certs-782377 kubelet[3905]: E0422 18:38:45.031877    3905 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:38:45 embed-certs-782377 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:38:45 embed-certs-782377 kubelet[3905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:38:45 embed-certs-782377 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:38:45 embed-certs-782377 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:38:49 embed-certs-782377 kubelet[3905]: E0422 18:38:49.998859    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:39:02 embed-certs-782377 kubelet[3905]: E0422 18:39:02.999284    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	
	
	==> storage-provisioner [c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756] <==
	I0422 18:30:00.772376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:30:00.791396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:30:00.791465       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:30:00.840872       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:30:00.841251       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-782377_d0c4b64e-30dc-4fc5-9911-6e54bec8a68a!
	I0422 18:30:00.845040       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c42af0d-a36f-47e6-9d2c-00802569f696", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-782377_d0c4b64e-30dc-4fc5-9911-6e54bec8a68a became leader
	I0422 18:30:00.941911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-782377_d0c4b64e-30dc-4fc5-9911-6e54bec8a68a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782377 -n embed-certs-782377
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-782377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-lv49p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-782377 describe pod metrics-server-569cc877fc-lv49p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-782377 describe pod metrics-server-569cc877fc-lv49p: exit status 1 (62.720577ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-lv49p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-782377 describe pod metrics-server-569cc877fc-lv49p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0422 18:31:03.542004   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-22 18:39:22.901381751 +0000 UTC m=+6147.630995357
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-856422 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-856422 logs -n 25: (2.140820334s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo cat                              | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:21:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:21:44.651239   78377 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:21:44.651502   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651512   78377 out.go:304] Setting ErrFile to fd 2...
	I0422 18:21:44.651517   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651743   78377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:21:44.652361   78377 out.go:298] Setting JSON to false
	I0422 18:21:44.653361   78377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7450,"bootTime":1713802655,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:21:44.653418   78377 start.go:139] virtualization: kvm guest
	I0422 18:21:44.655663   78377 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:21:44.657140   78377 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:21:44.658441   78377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:21:44.657169   78377 notify.go:220] Checking for updates...
	I0422 18:21:44.661128   78377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:21:44.662518   78377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:21:44.663775   78377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:21:44.665418   78377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:21:44.667565   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:21:44.667940   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.667974   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.682806   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0422 18:21:44.683248   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.683772   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.683796   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.684162   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.684386   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.686458   78377 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:21:44.688047   78377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:21:44.688430   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.688471   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.703069   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0422 18:21:44.703543   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.704022   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.704045   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.704344   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.704551   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.740500   78377 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:21:44.741959   78377 start.go:297] selected driver: kvm2
	I0422 18:21:44.741977   78377 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.742115   78377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:21:44.742852   78377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.742936   78377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:21:44.757771   78377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:21:44.758147   78377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:21:44.758223   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:21:44.758237   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:21:44.758283   78377 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.758417   78377 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.760296   78377 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:21:44.761538   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:21:44.761589   78377 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:21:44.761603   78377 cache.go:56] Caching tarball of preloaded images
	I0422 18:21:44.761682   78377 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:21:44.761696   78377 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:21:44.761815   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:21:44.762033   78377 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:21:45.719482   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:48.791433   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:54.871446   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:57.943441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:04.023441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:07.095417   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:13.175430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:16.247522   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:22.327414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:25.399441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:31.479440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:34.551439   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:40.631451   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:43.703447   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:49.783400   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:52.855484   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:58.935464   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:02.007435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:08.087442   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:11.159452   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:17.239435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:20.311430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:26.391420   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:29.463418   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:35.543443   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:38.615421   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:44.695419   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:47.767475   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:53.847471   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:56.919436   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:02.999404   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:06.071458   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:12.151440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:15.223414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:18.227587   77634 start.go:364] duration metric: took 4m29.759611802s to acquireMachinesLock for "embed-certs-782377"
	I0422 18:24:18.227650   77634 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:18.227661   77634 fix.go:54] fixHost starting: 
	I0422 18:24:18.227979   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:18.228013   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:18.243001   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0422 18:24:18.243415   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:18.243835   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:24:18.243850   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:18.244219   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:18.244384   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:18.244534   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:24:18.246202   77634 fix.go:112] recreateIfNeeded on embed-certs-782377: state=Stopped err=<nil>
	I0422 18:24:18.246228   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	W0422 18:24:18.246399   77634 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:18.248257   77634 out.go:177] * Restarting existing kvm2 VM for "embed-certs-782377" ...
	I0422 18:24:18.249777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Start
	I0422 18:24:18.249966   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring networks are active...
	I0422 18:24:18.250666   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network default is active
	I0422 18:24:18.251036   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network mk-embed-certs-782377 is active
	I0422 18:24:18.251499   77634 main.go:141] libmachine: (embed-certs-782377) Getting domain xml...
	I0422 18:24:18.252150   77634 main.go:141] libmachine: (embed-certs-782377) Creating domain...
	I0422 18:24:18.225125   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:18.225168   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225565   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:24:18.225593   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225781   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:24:18.227460   77400 machine.go:97] duration metric: took 4m37.410379606s to provisionDockerMachine
	I0422 18:24:18.227495   77400 fix.go:56] duration metric: took 4m37.433636251s for fixHost
	I0422 18:24:18.227499   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 4m37.433656207s
	W0422 18:24:18.227517   77400 start.go:713] error starting host: provision: host is not running
	W0422 18:24:18.227584   77400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0422 18:24:18.227593   77400 start.go:728] Will try again in 5 seconds ...
	I0422 18:24:19.442937   77634 main.go:141] libmachine: (embed-certs-782377) Waiting to get IP...
	I0422 18:24:19.444048   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.444425   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.444484   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.444392   78906 retry.go:31] will retry after 283.008432ms: waiting for machine to come up
	I0422 18:24:19.729076   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.729457   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.729493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.729411   78906 retry.go:31] will retry after 252.047573ms: waiting for machine to come up
	I0422 18:24:19.983011   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.983417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.983442   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.983397   78906 retry.go:31] will retry after 300.528755ms: waiting for machine to come up
	I0422 18:24:20.286039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.286467   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.286500   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.286425   78906 retry.go:31] will retry after 426.555496ms: waiting for machine to come up
	I0422 18:24:20.715191   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.715601   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.715638   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.715525   78906 retry.go:31] will retry after 533.433633ms: waiting for machine to come up
	I0422 18:24:21.250151   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:21.250702   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:21.250732   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:21.250646   78906 retry.go:31] will retry after 854.033547ms: waiting for machine to come up
	I0422 18:24:22.106728   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.107083   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.107109   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.107036   78906 retry.go:31] will retry after 761.233698ms: waiting for machine to come up
	I0422 18:24:22.870007   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.870408   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.870435   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.870364   78906 retry.go:31] will retry after 1.121568589s: waiting for machine to come up
	I0422 18:24:23.229316   77400 start.go:360] acquireMachinesLock for no-preload-407991: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:24:23.993127   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:23.993600   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:23.993623   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:23.993535   78906 retry.go:31] will retry after 1.525222377s: waiting for machine to come up
	I0422 18:24:25.520203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:25.520584   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:25.520609   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:25.520557   78906 retry.go:31] will retry after 1.618927059s: waiting for machine to come up
	I0422 18:24:27.140862   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:27.141363   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:27.141391   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:27.141315   78906 retry.go:31] will retry after 1.828869827s: waiting for machine to come up
	I0422 18:24:28.972053   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:28.972472   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:28.972508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:28.972438   78906 retry.go:31] will retry after 2.456935091s: waiting for machine to come up
	I0422 18:24:31.430825   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:31.431208   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:31.431266   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:31.431181   78906 retry.go:31] will retry after 3.415431602s: waiting for machine to come up
	I0422 18:24:36.144008   77929 start.go:364] duration metric: took 4m11.537292071s to acquireMachinesLock for "default-k8s-diff-port-856422"
	I0422 18:24:36.144073   77929 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:36.144079   77929 fix.go:54] fixHost starting: 
	I0422 18:24:36.144413   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:36.144450   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:36.161253   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0422 18:24:36.161715   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:36.162147   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:24:36.162166   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:36.162536   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:36.162743   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:36.162914   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:24:36.164366   77929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-856422: state=Stopped err=<nil>
	I0422 18:24:36.164397   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	W0422 18:24:36.164563   77929 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:36.166915   77929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-856422" ...
	I0422 18:24:34.847819   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848316   77634 main.go:141] libmachine: (embed-certs-782377) Found IP for machine: 192.168.50.114
	I0422 18:24:34.848339   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has current primary IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848357   77634 main.go:141] libmachine: (embed-certs-782377) Reserving static IP address...
	I0422 18:24:34.848741   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.848769   77634 main.go:141] libmachine: (embed-certs-782377) DBG | skip adding static IP to network mk-embed-certs-782377 - found existing host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"}
	I0422 18:24:34.848782   77634 main.go:141] libmachine: (embed-certs-782377) Reserved static IP address: 192.168.50.114
	I0422 18:24:34.848801   77634 main.go:141] libmachine: (embed-certs-782377) Waiting for SSH to be available...
	I0422 18:24:34.848808   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Getting to WaitForSSH function...
	I0422 18:24:34.850829   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851167   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.851199   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851332   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH client type: external
	I0422 18:24:34.851352   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa (-rw-------)
	I0422 18:24:34.851383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:34.851402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | About to run SSH command:
	I0422 18:24:34.851417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | exit 0
	I0422 18:24:34.975383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:34.975812   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetConfigRaw
	I0422 18:24:34.976602   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:34.979578   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.979959   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.979992   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.980238   77634 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/config.json ...
	I0422 18:24:34.980472   77634 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:34.980497   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:34.980777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:34.983493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.983958   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.983999   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.984175   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:34.984372   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984710   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:34.984894   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:34.985074   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:34.985086   77634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:35.099838   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:35.099873   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100144   77634 buildroot.go:166] provisioning hostname "embed-certs-782377"
	I0422 18:24:35.100169   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100381   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.103203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103589   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.103618   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103754   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.103930   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104116   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104262   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.104446   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.104696   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.104720   77634 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-782377 && echo "embed-certs-782377" | sudo tee /etc/hostname
	I0422 18:24:35.223934   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-782377
	
	I0422 18:24:35.223962   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.227033   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227376   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.227413   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.227779   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.227976   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.228140   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.228334   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.228492   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.228508   77634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-782377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-782377/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-782377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:35.346513   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:35.346545   77634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:35.346561   77634 buildroot.go:174] setting up certificates
	I0422 18:24:35.346571   77634 provision.go:84] configureAuth start
	I0422 18:24:35.346598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.346898   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:35.349820   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350164   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.350192   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350301   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.352921   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353288   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.353314   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353488   77634 provision.go:143] copyHostCerts
	I0422 18:24:35.353543   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:35.353552   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:35.353619   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:35.353717   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:35.353725   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:35.353749   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:35.353801   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:35.353810   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:35.353831   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:35.353894   77634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.embed-certs-782377 san=[127.0.0.1 192.168.50.114 embed-certs-782377 localhost minikube]
	I0422 18:24:35.463676   77634 provision.go:177] copyRemoteCerts
	I0422 18:24:35.463733   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:35.463758   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.466567   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.467039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.467415   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.467605   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.467740   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.549947   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:35.576364   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:24:35.601539   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:35.625959   77634 provision.go:87] duration metric: took 279.37435ms to configureAuth
	I0422 18:24:35.625992   77634 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:35.626171   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:35.626235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.629095   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.629533   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629707   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.629934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630077   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630238   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.630365   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.630546   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.630563   77634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:35.906862   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:35.906892   77634 machine.go:97] duration metric: took 926.403466ms to provisionDockerMachine
	I0422 18:24:35.906905   77634 start.go:293] postStartSetup for "embed-certs-782377" (driver="kvm2")
	I0422 18:24:35.906916   77634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:35.906934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:35.907241   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:35.907277   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.910029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.910438   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910599   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.910782   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.910993   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.911168   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.994189   77634 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:35.998376   77634 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:35.998395   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:35.998468   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:35.998545   77634 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:35.998650   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:36.008268   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:36.034031   77634 start.go:296] duration metric: took 127.110389ms for postStartSetup
	I0422 18:24:36.034081   77634 fix.go:56] duration metric: took 17.806421597s for fixHost
	I0422 18:24:36.034100   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.036964   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037357   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.037380   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.037775   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038051   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.038403   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:36.038568   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:36.038579   77634 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:24:36.143878   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810276.108619822
	
	I0422 18:24:36.143903   77634 fix.go:216] guest clock: 1713810276.108619822
	I0422 18:24:36.143911   77634 fix.go:229] Guest: 2024-04-22 18:24:36.108619822 +0000 UTC Remote: 2024-04-22 18:24:36.034084746 +0000 UTC m=+287.715620683 (delta=74.535076ms)
	I0422 18:24:36.143936   77634 fix.go:200] guest clock delta is within tolerance: 74.535076ms
	I0422 18:24:36.143941   77634 start.go:83] releasing machines lock for "embed-certs-782377", held for 17.916313877s
	I0422 18:24:36.143966   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.144235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:36.146867   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147228   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.147257   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147431   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.147883   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148066   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148171   77634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:36.148218   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.148377   77634 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:36.148403   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.150838   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151150   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151176   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151268   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151296   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.151466   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.151628   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.151671   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151695   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151747   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.151880   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.152055   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.152209   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.152350   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.229109   77634 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:36.266621   77634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:36.421344   77634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:36.427814   77634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:36.427892   77634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:36.448157   77634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:36.448192   77634 start.go:494] detecting cgroup driver to use...
	I0422 18:24:36.448255   77634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:36.468930   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:36.485780   77634 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:36.485856   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:36.502182   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:36.521179   77634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:36.636244   77634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:36.783292   77634 docker.go:233] disabling docker service ...
	I0422 18:24:36.783366   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:36.803014   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:36.817938   77634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:36.957954   77634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:37.085750   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:37.101054   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:37.123504   77634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:37.123555   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.134422   77634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:37.134491   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.145961   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.157192   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.170117   77634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:37.188656   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.205792   77634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.225739   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.236719   77634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:37.246351   77634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:37.246401   77634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:37.261144   77634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:37.271464   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:37.395686   77634 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:37.534079   77634 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:37.534156   77634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:37.539212   77634 start.go:562] Will wait 60s for crictl version
	I0422 18:24:37.539285   77634 ssh_runner.go:195] Run: which crictl
	I0422 18:24:37.543239   77634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:37.581460   77634 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:37.581562   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.611743   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.645811   77634 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:37.647247   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:37.650321   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.650811   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:37.650841   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.651055   77634 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:37.655865   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:37.673617   77634 kubeadm.go:877] updating cluster {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:37.673732   77634 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:37.673785   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:37.718534   77634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:37.718609   77634 ssh_runner.go:195] Run: which lz4
	I0422 18:24:37.723369   77634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:37.728270   77634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:37.728303   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:36.168344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Start
	I0422 18:24:36.168494   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring networks are active...
	I0422 18:24:36.169419   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network default is active
	I0422 18:24:36.169811   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network mk-default-k8s-diff-port-856422 is active
	I0422 18:24:36.170341   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Getting domain xml...
	I0422 18:24:36.171019   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Creating domain...
	I0422 18:24:37.407148   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting to get IP...
	I0422 18:24:37.408083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408430   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408509   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.408416   79040 retry.go:31] will retry after 267.855158ms: waiting for machine to come up
	I0422 18:24:37.677765   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678134   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678168   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.678084   79040 retry.go:31] will retry after 267.61504ms: waiting for machine to come up
	I0422 18:24:37.947737   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948250   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.948216   79040 retry.go:31] will retry after 351.088664ms: waiting for machine to come up
	I0422 18:24:38.300548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301057   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301090   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.301011   79040 retry.go:31] will retry after 560.164848ms: waiting for machine to come up
	I0422 18:24:38.862557   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863114   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.863075   79040 retry.go:31] will retry after 590.286684ms: waiting for machine to come up
	I0422 18:24:39.454925   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455483   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455510   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:39.455428   79040 retry.go:31] will retry after 870.474888ms: waiting for machine to come up
	I0422 18:24:39.338447   77634 crio.go:462] duration metric: took 1.615205556s to copy over tarball
	I0422 18:24:39.338545   77634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:41.640474   77634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301883484s)
	I0422 18:24:41.640514   77634 crio.go:469] duration metric: took 2.302038123s to extract the tarball
	I0422 18:24:41.640524   77634 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:24:41.680325   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:41.724755   77634 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:24:41.724777   77634 cache_images.go:84] Images are preloaded, skipping loading
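Editor's note: the preload flow above is: check `crictl images` for the expected kube-apiserver tag, and if it is missing, copy the lz4 tarball over and unpack it into /var. A condensed sketch of the same check-and-extract step, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the log:

# Does the runtime already have the image the preload would provide?
if ! sudo crictl images | grep -q 'registry.k8s.io/kube-apiserver.*v1.30.0'; then
  # Unpack the preloaded image store into /var, preserving xattrs (same flags as the log)
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm -f /preloaded.tar.lz4
fi
sudo crictl images   # should now list the preloaded images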
	I0422 18:24:41.724785   77634 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.30.0 crio true true} ...
	I0422 18:24:41.724887   77634 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-782377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:24:41.724964   77634 ssh_runner.go:195] Run: crio config
	I0422 18:24:41.772680   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:41.772704   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:41.772715   77634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:24:41.772733   77634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-782377 NodeName:embed-certs-782377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:24:41.772898   77634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-782377"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:24:41.772964   77634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:24:41.783492   77634 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:24:41.783575   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:24:41.793500   77634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0422 18:24:41.810415   77634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:24:41.827504   77634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
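Editor's note: with the rendered kubeadm.yaml copied to /var/tmp/minikube, it can be sanity-checked before any init phase runs. A small sketch, assuming the file path from the log and the pinned kubeadm under /var/lib/minikube/binaries; --dry-run only prints what would be done:

# Render the defaults kubeadm would use, for comparison with the generated file
sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
  kubeadm config print init-defaults
# Walk the whole init flow without modifying the node
sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run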
	I0422 18:24:41.845704   77634 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0422 18:24:41.849728   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:41.862798   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:41.998260   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
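Editor's note: after the 10-kubeadm.conf drop-in and unit file are written and kubelet is started, the usual way to confirm the override took effect is via systemd itself. A brief sketch, assuming a systemd host:

# Show the effective unit including the 10-kubeadm.conf drop-in written above
systemctl cat kubelet
sudo systemctl is-active kubelet
# Tail recent kubelet logs if it fails to come up
sudo journalctl -u kubelet --no-pager -n 50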
	I0422 18:24:42.018779   77634 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377 for IP: 192.168.50.114
	I0422 18:24:42.018801   77634 certs.go:194] generating shared ca certs ...
	I0422 18:24:42.018820   77634 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:24:42.018977   77634 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:24:42.019034   77634 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:24:42.019048   77634 certs.go:256] generating profile certs ...
	I0422 18:24:42.019146   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/client.key
	I0422 18:24:42.019218   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key.d804c20e
	I0422 18:24:42.019298   77634 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key
	I0422 18:24:42.019455   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:24:42.019493   77634 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:24:42.019509   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:24:42.019539   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:24:42.019571   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:24:42.019606   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:24:42.019665   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:42.020460   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:24:42.065297   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:24:42.098581   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:24:42.139751   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:24:42.169770   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0422 18:24:42.199958   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:24:42.229298   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:24:42.254517   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:24:42.279390   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:24:42.303872   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:24:42.329704   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:24:42.355108   77634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:24:42.372684   77634 ssh_runner.go:195] Run: openssl version
	I0422 18:24:42.378631   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:24:42.389709   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394492   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394552   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.400346   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:24:42.411335   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:24:42.422568   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427213   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427278   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.433277   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:24:42.444618   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:24:42.455793   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460681   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460739   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.466785   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:24:42.485401   77634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:24:42.491205   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:24:42.498635   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:24:42.510577   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:24:42.517596   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:24:42.524413   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:24:42.530872   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
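Editor's note: the ln -fs calls above install each CA under the OpenSSL subject-hash name (the same name c_rehash would generate), and the -checkend runs verify no serving cert expires within 24 hours (86400 s). A compact sketch of both techniques, using one of the PEM files from the log:

CERT=/usr/share/ca-certificates/minikubeCA.pem
# Subject-hash filename OpenSSL looks up in /etc/ssl/certs (e.g. b5213941.0)
HASH=$(openssl x509 -hash -noout -in "$CERT")
sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
# Exit status 0 only if the cert is still valid 86400 seconds from now
openssl x509 -noout -in "$CERT" -checkend 86400 && echo "valid for at least 24h"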
	I0422 18:24:42.537199   77634 kubeadm.go:391] StartCluster: {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:24:42.537319   77634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:24:42.537379   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.579863   77634 cri.go:89] found id: ""
	I0422 18:24:42.579944   77634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:24:42.590756   77634 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:24:42.590781   77634 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:24:42.590788   77634 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:24:42.590844   77634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:24:42.601517   77634 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:24:42.603120   77634 kubeconfig.go:125] found "embed-certs-782377" server: "https://192.168.50.114:8443"
	I0422 18:24:42.606189   77634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:24:42.616881   77634 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0422 18:24:42.616911   77634 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:24:42.616922   77634 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:24:42.616970   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.656829   77634 cri.go:89] found id: ""
	I0422 18:24:42.656923   77634 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:24:42.675575   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:24:42.686408   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:24:42.686431   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:24:42.686484   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:24:42.697303   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:24:42.697391   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:24:42.707693   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:24:42.717836   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:24:42.717932   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:24:42.729952   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.740902   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:24:42.740980   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.751946   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:24:42.761758   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:24:42.761830   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:24:42.772699   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:24:42.783018   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:42.891737   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:40.327325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327782   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:40.327726   79040 retry.go:31] will retry after 926.321969ms: waiting for machine to come up
	I0422 18:24:41.255601   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256117   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:41.256072   79040 retry.go:31] will retry after 928.33371ms: waiting for machine to come up
	I0422 18:24:42.186290   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186798   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186826   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:42.186762   79040 retry.go:31] will retry after 1.708117553s: waiting for machine to come up
	I0422 18:24:43.896236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:43.896597   79040 retry.go:31] will retry after 1.720003793s: waiting for machine to come up
	I0422 18:24:44.055395   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.163622709s)
	I0422 18:24:44.055429   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.278840   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.351743   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
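Editor's note: restarting an existing control plane is done phase by phase rather than with a full kubeadm init. The sequence above can be reproduced as follows, assuming the same config path and binaries directory as in the log:

CFG=/var/tmp/minikube/kubeadm.yaml
BIN=/var/lib/minikube/binaries/v1.30.0
# Same sudo-env pattern the log uses; $phase is intentionally unquoted so
# "certs all" splits into the subcommand and its argument
for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
done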
	I0422 18:24:44.460115   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:24:44.460202   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:44.960631   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.460588   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.478048   77634 api_server.go:72] duration metric: took 1.017932232s to wait for apiserver process to appear ...
	I0422 18:24:45.478082   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:24:45.478104   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:45.478702   77634 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0422 18:24:45.978527   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.247298   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:24:48.247334   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:24:48.247351   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.295953   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.296005   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.478899   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.488884   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.488920   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.978472   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.992521   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.992552   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:49.479179   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:49.485588   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:24:49.493015   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:24:49.493055   77634 api_server.go:131] duration metric: took 4.01496465s to wait for apiserver health ...
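Editor's note: the poll loop above hits /healthz until it returns 200; the 403 for system:anonymous and the verbose [+]/[-] breakdown on 500 are normal while the post-start hooks and RBAC bootstrap roles finish. A hand-rolled version of the same probe, assuming the client cert pair generated earlier on the node grants access (any authenticated identity permitted to GET /healthz works):

APISERVER=https://192.168.50.114:8443
CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
KEY=/var/lib/minikube/certs/apiserver-kubelet-client.key
# -f: treat 4xx/5xx as failure; -k: skip CA verification for the sketch
until sudo curl -fsk --cert "$CERT" --key "$KEY" "$APISERVER/healthz" >/dev/null; do
  sleep 0.5
done
# ?verbose lists each post-start hook check, matching the output in the log
sudo curl -sk --cert "$CERT" --key "$KEY" "$APISERVER/healthz?verbose"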
	I0422 18:24:49.493065   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:49.493074   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:49.494997   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:24:45.618240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618714   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618744   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:45.618673   79040 retry.go:31] will retry after 2.396679945s: waiting for machine to come up
	I0422 18:24:48.016812   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017231   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017258   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:48.017197   79040 retry.go:31] will retry after 2.304959564s: waiting for machine to come up
	I0422 18:24:49.496476   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:24:49.516525   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
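Editor's note: the 496-byte 1-k8s.conflist written above is minikube's bridge CNI configuration. Its exact contents are not in the log; the following is only an assumed, typical bridge conflist shape for the 10.244.0.0/16 pod CIDR used by this cluster, written to an example path so it is not mistaken for the real file:

# Illustrative shape only (an assumption, not the file minikube actually writes)
tee /tmp/bridge-conflist.example >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF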
	I0422 18:24:49.541103   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:24:49.552224   77634 system_pods.go:59] 8 kube-system pods found
	I0422 18:24:49.552263   77634 system_pods.go:61] "coredns-7db6d8ff4d-lxcv2" [137ad3db-8bc5-4b7f-8eb0-12a278eba41c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:24:49.552273   77634 system_pods.go:61] "etcd-embed-certs-782377" [85322e31-1ad6-4239-8086-f2a465a28d8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:24:49.552287   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [e791d7d4-a94d-4cce-a50d-4e569350f210] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:24:49.552307   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [cbcc2e7f-7b3a-435b-97d5-5b69b7e399c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:24:49.552317   77634 system_pods.go:61] "kube-proxy-r4249" [7ffb3b8f-53d8-45df-8426-74f0ffb0d20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 18:24:49.552327   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [9568040b-3eca-403e-b078-d6f2071e70c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:24:49.552335   77634 system_pods.go:61] "metrics-server-569cc877fc-d8s5p" [3bcda1df-02f7-4405-95c7-4d8559a0138c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:24:49.552342   77634 system_pods.go:61] "storage-provisioner" [c196d779-346a-4e3f-b1c3-dde4292df017] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:24:49.552351   77634 system_pods.go:74] duration metric: took 11.221599ms to wait for pod list to return data ...
	I0422 18:24:49.552373   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:24:49.556086   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:24:49.556130   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:24:49.556142   77634 node_conditions.go:105] duration metric: took 3.764067ms to run NodePressure ...
	I0422 18:24:49.556161   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:49.852023   77634 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856866   77634 kubeadm.go:733] kubelet initialised
	I0422 18:24:49.856894   77634 kubeadm.go:734] duration metric: took 4.83996ms waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856904   77634 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:24:49.863808   77634 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.868817   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868840   77634 pod_ready.go:81] duration metric: took 5.001181ms for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.868849   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868855   77634 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.873591   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873612   77634 pod_ready.go:81] duration metric: took 4.750292ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.873621   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873627   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.878471   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878494   77634 pod_ready.go:81] duration metric: took 4.859998ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.878503   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878510   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.945869   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945909   77634 pod_ready.go:81] duration metric: took 67.385628ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.945923   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945932   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345633   77634 pod_ready.go:92] pod "kube-proxy-r4249" in "kube-system" namespace has status "Ready":"True"
	I0422 18:24:50.345655   77634 pod_ready.go:81] duration metric: took 399.713725ms for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345666   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:52.352988   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
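Editor's note: the pod_ready loop above is equivalent to waiting on the Ready condition of each control-plane pod; pods are skipped while the node itself still reports Ready=False. A sketch of the same wait with kubectl against the same profile context (the 4m timeout mirrors the log's wait budget):

# Wait for the node to become Ready first, then for the scheduler pod
kubectl --context embed-certs-782377 wait node/embed-certs-782377 \
  --for=condition=Ready --timeout=4m
kubectl --context embed-certs-782377 -n kube-system wait pod \
  -l component=kube-scheduler --for=condition=Ready --timeout=4m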
	I0422 18:24:50.324396   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324920   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324953   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:50.324894   79040 retry.go:31] will retry after 4.018790507s: waiting for machine to come up
	I0422 18:24:54.347584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348046   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Found IP for machine: 192.168.61.206
	I0422 18:24:54.348081   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has current primary IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348094   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserving static IP address...
	I0422 18:24:54.348535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserved static IP address: 192.168.61.206
	I0422 18:24:54.348560   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for SSH to be available...
	I0422 18:24:54.348584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.348624   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | skip adding static IP to network mk-default-k8s-diff-port-856422 - found existing host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"}
	I0422 18:24:54.348640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Getting to WaitForSSH function...
	I0422 18:24:54.351069   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351570   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.351608   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH client type: external
	I0422 18:24:54.351758   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa (-rw-------)
	I0422 18:24:54.351793   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:54.351810   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | About to run SSH command:
	I0422 18:24:54.351834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | exit 0
	I0422 18:24:54.479277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | SSH cmd err, output: <nil>: 
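The WaitForSSH step above shells out to the external ssh binary with host-key checking disabled and runs `exit 0` until the command succeeds. A rough equivalent is sketched below; the IP, user, and key path are placeholders standing in for the values shown in the log, and the retry count is illustrative.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Placeholder values; the log uses docker@192.168.61.206 and the per-machine id_rsa key.
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", "/path/to/id_rsa",
    		"docker@192.168.61.206",
    		"exit 0",
    	}
    	for i := 0; i < 30; i++ {
    		if err := exec.Command("ssh", args...).Run(); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }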
	I0422 18:24:54.479674   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetConfigRaw
	I0422 18:24:54.480350   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.483089   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.483498   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483801   77929 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/config.json ...
	I0422 18:24:54.484031   77929 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:54.484051   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:54.484272   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.486449   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.486857   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486992   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.487178   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487470   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.487635   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.487825   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.487838   77929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:55.812288   78377 start.go:364] duration metric: took 3m11.050220887s to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:24:55.812348   78377 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:55.812359   78377 fix.go:54] fixHost starting: 
	I0422 18:24:55.812769   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:55.812806   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:55.830114   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0422 18:24:55.830528   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:55.831130   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:24:55.831155   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:55.831459   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:55.831688   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:24:55.831855   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:24:55.833322   78377 fix.go:112] recreateIfNeeded on old-k8s-version-367072: state=Stopped err=<nil>
	I0422 18:24:55.833351   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	W0422 18:24:55.833481   78377 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:55.835517   78377 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	I0422 18:24:54.603732   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:54.603759   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.603993   77929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-856422"
	I0422 18:24:54.604017   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.604280   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.606938   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607302   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.607331   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607524   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.607693   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.607856   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.608002   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.608174   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.608381   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.608398   77929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-856422 && echo "default-k8s-diff-port-856422" | sudo tee /etc/hostname
	I0422 18:24:54.734622   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-856422
	
	I0422 18:24:54.734646   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.737804   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738109   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.738141   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.738495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738773   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.738950   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.739157   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.739176   77929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-856422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-856422/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-856422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:54.864646   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
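Provisioning first sets the hostname and then patches /etc/hosts so the new name resolves to 127.0.1.1, as in the shell snippet above. The sketch below shows how that one-liner could be assembled in Go; it only reproduces the command seen in the log, and the hostname passed in main is just this run's value used as an example.

    package main

    import "fmt"

    // hostsCommand builds the /etc/hosts patch command from the log: if no line
    // already ends with the hostname, either rewrite an existing 127.0.1.1 entry
    // or append a new one.
    func hostsCommand(hostname string) string {
    	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsCommand("default-k8s-diff-port-856422"))
    }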
	I0422 18:24:54.864679   77929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:54.864732   77929 buildroot.go:174] setting up certificates
	I0422 18:24:54.864745   77929 provision.go:84] configureAuth start
	I0422 18:24:54.864764   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.865059   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.868205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868626   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.868666   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868868   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.871736   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872118   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.872147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872275   77929 provision.go:143] copyHostCerts
	I0422 18:24:54.872340   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:54.872353   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:54.872424   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:54.872545   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:54.872557   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:54.872598   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:54.872676   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:54.872688   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:54.872718   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:54.872794   77929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-856422 san=[127.0.0.1 192.168.61.206 default-k8s-diff-port-856422 localhost minikube]
	I0422 18:24:55.091765   77929 provision.go:177] copyRemoteCerts
	I0422 18:24:55.091820   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:55.091848   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.094572   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.094939   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.094970   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.095209   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.095501   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.095767   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.095958   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.192243   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:55.223313   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0422 18:24:55.250149   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:55.279442   77929 provision.go:87] duration metric: took 414.679508ms to configureAuth
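configureAuth above issues a server certificate signed by the existing minikube CA, with the SAN list shown in the log (127.0.0.1, 192.168.61.206, the hostname, localhost, minikube), and copies it to /etc/docker on the guest. The crypto/x509 sketch below is a simplified stand-in, not minikube's implementation: the file paths are placeholders, it assumes the CA key is an RSA key in PKCS#1 PEM form, and some error handling is trimmed for brevity.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder paths standing in for certs/ca.pem and certs/ca-key.pem from the log.
    	caPEM, err := os.ReadFile("ca.pem")
    	if err != nil {
    		panic(err)
    	}
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	if err != nil {
    		panic(err)
    	}
    	caBlock, _ := pem.Decode(caPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA PKCS#1 CA key
    	if err != nil {
    		panic(err)
    	}

    	// Fresh key pair for the server certificate.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-856422"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SAN list matching the log line: 127.0.0.1 192.168.61.206 hostname localhost minikube
    		DNSNames:    []string{"default-k8s-diff-port-856422", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.206")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }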
	I0422 18:24:55.279474   77929 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:55.280056   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:55.280125   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.282806   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.283237   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283405   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.283636   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283803   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283941   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.284109   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.284276   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.284294   77929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:55.565199   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:55.565225   77929 machine.go:97] duration metric: took 1.081180365s to provisionDockerMachine
	I0422 18:24:55.565239   77929 start.go:293] postStartSetup for "default-k8s-diff-port-856422" (driver="kvm2")
	I0422 18:24:55.565282   77929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:55.565312   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.565649   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:55.565682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.568211   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.568614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568809   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.568994   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.569182   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.569352   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.654461   77929 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:55.658992   77929 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:55.659016   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:55.659091   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:55.659199   77929 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:55.659309   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:55.669183   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:55.694953   77929 start.go:296] duration metric: took 129.698973ms for postStartSetup
	I0422 18:24:55.694998   77929 fix.go:56] duration metric: took 19.550918724s for fixHost
	I0422 18:24:55.695021   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.697596   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.697926   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.697958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.698133   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.698325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698479   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698579   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.698680   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.698897   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.698914   77929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:24:55.812106   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810295.778892948
	
	I0422 18:24:55.812132   77929 fix.go:216] guest clock: 1713810295.778892948
	I0422 18:24:55.812143   77929 fix.go:229] Guest: 2024-04-22 18:24:55.778892948 +0000 UTC Remote: 2024-04-22 18:24:55.69500303 +0000 UTC m=+271.245786903 (delta=83.889918ms)
	I0422 18:24:55.812168   77929 fix.go:200] guest clock delta is within tolerance: 83.889918ms
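The guest clock check above parses the `date +%s.%N` output from the VM and compares it to the local timestamp; the run continues because the ~83ms delta is within tolerance. A small sketch of the same comparison follows; the sample value is the one from this log, and the 1s tolerance is only an illustrative threshold, not minikube's exact setting.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts "seconds.nanoseconds" output from `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		nsec, err = strconv.ParseInt(parts[1], 10, 64)
    		if err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1713810295.778892948") // value from this run
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	tolerance := time.Second // illustrative threshold
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }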
	I0422 18:24:55.812176   77929 start.go:83] releasing machines lock for "default-k8s-diff-port-856422", held for 19.668119564s
	I0422 18:24:55.812213   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.812500   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:55.815404   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.815786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.815828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.816036   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816526   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816698   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816781   77929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:55.816823   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.817092   77929 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:55.817116   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.819495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819710   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819931   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.819958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820045   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.820181   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820217   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820362   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820366   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820631   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.820716   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820845   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.904810   77929 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:55.937093   77929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:56.089389   77929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:56.096144   77929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:56.096208   77929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:56.118194   77929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:56.118224   77929 start.go:494] detecting cgroup driver to use...
	I0422 18:24:56.118292   77929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:56.134918   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:56.154107   77929 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:56.154180   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:56.168971   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:56.188793   77929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:56.310223   77929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:56.492316   77929 docker.go:233] disabling docker service ...
	I0422 18:24:56.492430   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:56.515169   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:56.529734   77929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:56.670628   77929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:56.810823   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:56.826785   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:56.847682   77929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:56.847741   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.860499   77929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:56.860576   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.872086   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.883347   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.901596   77929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:56.916912   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.928121   77929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.947335   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.958431   77929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:56.968077   77929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:56.968131   77929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:56.982135   77929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
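When the `sysctl net.bridge.bridge-nf-call-iptables` probe fails (as above, because br_netfilter is not yet loaded), the flow falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A shell-out sketch of that fallback, under the assumption that passwordless sudo is available on the guest:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Probe the bridge netfilter sysctl; failure usually means br_netfilter is not loaded.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Println("modprobe failed:", err)
    		}
    	}
    	// Enable IPv4 forwarding, mirroring the `echo 1 > /proc/sys/net/ipv4/ip_forward` step.
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }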
	I0422 18:24:56.991801   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:57.125635   77929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:57.263889   77929 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:57.263973   77929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:57.269573   77929 start.go:562] Will wait 60s for crictl version
	I0422 18:24:57.269627   77929 ssh_runner.go:195] Run: which crictl
	I0422 18:24:57.273613   77929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:57.314357   77929 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:57.314463   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.345062   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.380868   77929 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:54.353338   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:56.853757   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:57.382284   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:57.385215   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:57.385655   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385889   77929 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:57.390482   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:57.405644   77929 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:57.405766   77929 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:57.405868   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:57.452528   77929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:57.452604   77929 ssh_runner.go:195] Run: which lz4
	I0422 18:24:57.456903   77929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:57.461373   77929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:57.461411   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:59.060426   77929 crio.go:462] duration metric: took 1.603560712s to copy over tarball
	I0422 18:24:59.060532   77929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
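The preload tarball copied above is unpacked into /var on the guest with extended attributes preserved, which is what seeds the cri-o image store before kubeadm runs (the later "all images are preloaded" line). A shell-out sketch of that extraction step, assuming GNU tar and lz4 are present on the guest as they are in the minikube ISO:

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Mirror of the extraction command in the log; paths are the same as shown there.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4",
    		"-C", "/var",
    		"-xf", "/preloaded.tar.lz4",
    	)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    	log.Println("preloaded images extracted into /var")
    }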
	I0422 18:24:55.836947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .Start
	I0422 18:24:55.837156   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:24:55.837991   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:24:55.838340   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:24:55.838802   78377 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:24:55.839484   78377 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:24:57.114447   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:24:57.115418   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.115808   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.115885   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.115780   79197 retry.go:31] will retry after 292.692957ms: waiting for machine to come up
	I0422 18:24:57.410220   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.410760   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.410793   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.410707   79197 retry.go:31] will retry after 381.746596ms: waiting for machine to come up
	I0422 18:24:57.794121   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.794537   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.794561   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.794500   79197 retry.go:31] will retry after 343.501318ms: waiting for machine to come up
	I0422 18:24:58.140203   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.140843   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.140872   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.140795   79197 retry.go:31] will retry after 497.222481ms: waiting for machine to come up
	I0422 18:24:58.639611   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.640103   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.640133   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.640061   79197 retry.go:31] will retry after 578.746837ms: waiting for machine to come up
	I0422 18:24:59.220771   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.221312   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.221342   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.221264   79197 retry.go:31] will retry after 773.821721ms: waiting for machine to come up
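The "will retry after …" lines above come from a bounded retry helper that sleeps a growing, jittered interval between DHCP-lease lookups while the restarted VM acquires an IP. A generic sketch of that pattern is below; the attempt count and backoff numbers are illustrative, not minikube's exact schedule.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping a jittered,
    // growing delay between tries, similar in spirit to the retry.go lines in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	tries := 0
    	err := retry(5, 300*time.Millisecond, func() error {
    		tries++
    		if tries < 4 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil // pretend the domain finally reported an IP
    	})
    	fmt.Println("result:", err)
    }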
	I0422 18:24:58.854112   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:00.856147   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:01.563849   77929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503290941s)
	I0422 18:25:01.563881   77929 crio.go:469] duration metric: took 2.503413712s to extract the tarball
	I0422 18:25:01.563891   77929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:01.603330   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:01.649885   77929 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:25:01.649909   77929 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:25:01.649916   77929 kubeadm.go:928] updating node { 192.168.61.206 8444 v1.30.0 crio true true} ...
	I0422 18:25:01.650053   77929 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-856422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:01.650143   77929 ssh_runner.go:195] Run: crio config
	I0422 18:25:01.698892   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:01.698915   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:01.698929   77929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:01.698948   77929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.206 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-856422 NodeName:default-k8s-diff-port-856422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:01.699075   77929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.206
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-856422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:01.699150   77929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:01.709830   77929 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:01.709903   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:01.720447   77929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0422 18:25:01.738745   77929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:01.756420   77929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
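The kubelet unit drop-in and the kubeadm config printed above are rendered from templates and then copied to the guest as 10-kubeadm.conf, kubelet.service, and kubeadm.yaml.new. A minimal text/template sketch of how the ExecStart line could be produced from this run's node values; the template string is a simplified stand-in, not minikube's real template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletParams holds the values substituted into the drop-in; the struct and
    // template below are illustrative only.
    type kubeletParams struct {
    	KubernetesVersion string
    	NodeName          string
    	NodeIP            string
    }

    const kubeletUnit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	_ = t.Execute(os.Stdout, kubeletParams{
    		KubernetesVersion: "v1.30.0",
    		NodeName:          "default-k8s-diff-port-856422",
    		NodeIP:            "192.168.61.206",
    	})
    }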
	I0422 18:25:01.775364   77929 ssh_runner.go:195] Run: grep 192.168.61.206	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:01.779476   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:01.792860   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:01.920607   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:01.939637   77929 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422 for IP: 192.168.61.206
	I0422 18:25:01.939658   77929 certs.go:194] generating shared ca certs ...
	I0422 18:25:01.939675   77929 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:01.939858   77929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:01.939911   77929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:01.939922   77929 certs.go:256] generating profile certs ...
	I0422 18:25:01.940026   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/client.key
	I0422 18:25:01.940105   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key.e8400874
	I0422 18:25:01.940170   77929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key
	I0422 18:25:01.940320   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:01.940386   77929 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:01.940400   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:01.940437   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:01.940474   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:01.940506   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:01.940603   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:01.941408   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:01.981392   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:02.020335   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:02.057221   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:02.088571   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 18:25:02.123716   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:02.153926   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:02.183499   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:02.212438   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:02.238650   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:02.265786   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:02.295001   77929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:02.315343   77929 ssh_runner.go:195] Run: openssl version
	I0422 18:25:02.322001   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:02.334785   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340619   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340686   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.348942   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:02.364960   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:02.381460   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386720   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386794   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.392894   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:02.404951   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:02.417334   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423503   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423573   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.430512   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
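Each CA bundle copied above is then made discoverable the way OpenSSL expects: a symlink under /etc/ssl/certs plus a <subject-hash>.0 link. For one of the three certificates (paths and hash taken from the log), the steps are roughly:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 here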
	I0422 18:25:02.444132   77929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:02.449749   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:02.456667   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:02.463700   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:02.470474   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:02.477324   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:02.483900   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
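The -checkend 86400 calls above are 24-hour expiry checks: openssl exits non-zero if the certificate will expire within the given number of seconds, so the caller can decide whether regeneration is needed. For example:

	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for at least 24h" || echo "expiring soon"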
	I0422 18:25:02.490614   77929 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:02.490719   77929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:02.490768   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.538766   77929 cri.go:89] found id: ""
	I0422 18:25:02.538849   77929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:02.549686   77929 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:02.549711   77929 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:02.549717   77929 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:02.549794   77929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:02.560594   77929 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:02.561584   77929 kubeconfig.go:125] found "default-k8s-diff-port-856422" server: "https://192.168.61.206:8444"
	I0422 18:25:02.563656   77929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:02.575462   77929 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.206
	I0422 18:25:02.575507   77929 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:02.575522   77929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:02.575606   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.628012   77929 cri.go:89] found id: ""
	I0422 18:25:02.628080   77929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:02.645405   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:02.656723   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:02.656751   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:02.656814   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:25:02.667202   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:02.667269   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:02.678303   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:25:02.688600   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:02.688690   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:02.699963   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.710329   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:02.710393   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.721188   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:25:02.731964   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:02.732040   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
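Each grep above probes one kubeconfig for the expected API server URL and, when the file is absent or stale, removes it so kubeadm can rewrite it in the next step. Per file, the pattern is roughly:

	f=/etc/kubernetes/admin.conf
	sudo grep -q 'https://control-plane.minikube.internal:8444' "$f" || sudo rm -f "$f"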
	I0422 18:25:02.743541   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:02.755030   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:02.870301   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:03.995375   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125032803s)
	I0422 18:25:03.995447   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.230252   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.302979   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
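With the old kubeconfigs cleared, the control plane is rebuilt piecewise via kubeadm init phases rather than a full init. The sequence run above, spelled out:

	B=/var/lib/minikube/binaries/v1.30.0
	C=/var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$B:$PATH" kubeadm init phase certs all         --config "$C"
	sudo env PATH="$B:$PATH" kubeadm init phase kubeconfig all    --config "$C"
	sudo env PATH="$B:$PATH" kubeadm init phase kubelet-start     --config "$C"
	sudo env PATH="$B:$PATH" kubeadm init phase control-plane all --config "$C"
	sudo env PATH="$B:$PATH" kubeadm init phase etcd local        --config "$C"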
	I0422 18:25:04.395038   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:04.395115   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:59.996437   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.996984   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.997018   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.996926   79197 retry.go:31] will retry after 1.191182438s: waiting for machine to come up
	I0422 18:25:01.190382   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:01.190954   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:01.190990   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:01.190917   79197 retry.go:31] will retry after 1.312288818s: waiting for machine to come up
	I0422 18:25:02.504320   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:02.504783   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:02.504807   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:02.504744   79197 retry.go:31] will retry after 1.553447941s: waiting for machine to come up
	I0422 18:25:04.060300   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:04.060822   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:04.060855   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:04.060778   79197 retry.go:31] will retry after 1.790234912s: waiting for machine to come up
	I0422 18:25:03.502023   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.353882   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:04.353905   77634 pod_ready.go:81] duration metric: took 14.00823208s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:04.353915   77634 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:06.363356   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:08.363954   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.896176   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.396048   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.440071   77929 api_server.go:72] duration metric: took 1.045032787s to wait for apiserver process to appear ...
	I0422 18:25:05.440103   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:25:05.440148   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.759542   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.759577   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.759592   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.793255   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.793294   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.940652   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.945611   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:08.945646   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:09.440292   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.464743   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.464770   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:05.852898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:05.853386   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:05.853413   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:05.853350   79197 retry.go:31] will retry after 2.265221688s: waiting for machine to come up
	I0422 18:25:08.121376   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:08.121797   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:08.121835   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:08.121747   79197 retry.go:31] will retry after 3.098868652s: waiting for machine to come up
	I0422 18:25:09.940470   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.946872   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.946900   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:10.441291   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:10.445834   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:25:10.452788   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:25:10.452814   77929 api_server.go:131] duration metric: took 5.012704724s to wait for apiserver health ...
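The 403 → 500 → 200 progression above is the expected restart sequence: anonymous requests are rejected until the RBAC bootstrap roles exist (the [-]poststarthook/rbac/bootstrap-roles entries), after which /healthz returns a plain ok. A quick manual probe of the same endpoint:

	curl -ks https://192.168.61.206:8444/healthz; echo
	# body: a Forbidden Status object, then the failed-check list, then "ok"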
	I0422 18:25:10.452823   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:10.452828   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:10.454695   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:25:10.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:13.361234   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:10.456234   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:25:10.469460   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
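The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config announced above. Its exact contents are not in the log; a representative bridge conflist (values here are illustrative only, not the file minikube writes) would look like:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF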
	I0422 18:25:10.510297   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:25:10.527988   77929 system_pods.go:59] 8 kube-system pods found
	I0422 18:25:10.528034   77929 system_pods.go:61] "coredns-7db6d8ff4d-w968m" [1372c3d4-cb23-4f33-911b-57876688fcd4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:25:10.528044   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [af6c3f45-494d-469b-95e0-3d0842d07a70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:25:10.528051   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [665925b4-3073-41c2-86c0-12186f079459] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:25:10.528057   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [e8661b67-89c5-43a6-b66e-828f637942e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:25:10.528061   77929 system_pods.go:61] "kube-proxy-4xvx2" [0e662ebe-1f6f-48fe-86c7-595b0bfa4bb6] Running
	I0422 18:25:10.528066   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [e6101593-2ee5-4765-b129-33b3ed7d4c98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:25:10.528075   77929 system_pods.go:61] "metrics-server-569cc877fc-l5qqw" [85eab808-f1f0-4fbc-9c54-1ae307226243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:25:10.528079   77929 system_pods.go:61] "storage-provisioner" [ba8465de-babc-4496-809f-68f6ec917ce8] Running
	I0422 18:25:10.528095   77929 system_pods.go:74] duration metric: took 17.768241ms to wait for pod list to return data ...
	I0422 18:25:10.528104   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:25:10.539169   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:25:10.539202   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:25:10.539214   77929 node_conditions.go:105] duration metric: took 11.105847ms to run NodePressure ...
	I0422 18:25:10.539237   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:10.808687   77929 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:25:10.815993   77929 kubeadm.go:733] kubelet initialised
	I0422 18:25:10.816025   77929 kubeadm.go:734] duration metric: took 7.302574ms waiting for restarted kubelet to initialise ...
	I0422 18:25:10.816037   77929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:25:10.824257   77929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:12.837255   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:11.221887   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:11.222319   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:11.222358   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:11.222277   79197 retry.go:31] will retry after 4.068460973s: waiting for machine to come up
	I0422 18:25:16.704684   77400 start.go:364] duration metric: took 53.475319353s to acquireMachinesLock for "no-preload-407991"
	I0422 18:25:16.704741   77400 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:25:16.704752   77400 fix.go:54] fixHost starting: 
	I0422 18:25:16.705132   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:25:16.705166   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:25:16.721711   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0422 18:25:16.722127   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:25:16.722671   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:25:16.722693   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:25:16.723022   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:25:16.723220   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:16.723426   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:25:16.725197   77400 fix.go:112] recreateIfNeeded on no-preload-407991: state=Stopped err=<nil>
	I0422 18:25:16.725231   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	W0422 18:25:16.725430   77400 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:25:16.727275   77400 out.go:177] * Restarting existing kvm2 VM for "no-preload-407991" ...
	I0422 18:25:15.295463   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296039   78377 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:25:15.296072   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296081   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:25:15.296472   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.296493   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
	I0422 18:25:15.296508   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | skip adding static IP to network mk-old-k8s-version-367072 - found existing host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"}
	I0422 18:25:15.296524   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:25:15.296537   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:25:15.299164   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299527   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.299562   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299661   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:25:15.299692   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:25:15.299731   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:15.299745   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:25:15.299762   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:25:15.431323   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:15.431669   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:25:15.432328   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.434829   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435261   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.435293   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435554   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:25:15.435765   78377 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:15.435786   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:15.436017   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.438390   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438750   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.438784   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438910   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.439095   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439314   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.439666   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.439849   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.439861   78377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:15.555657   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:15.555686   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.555931   78377 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:25:15.555962   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.556169   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.558789   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559254   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.559292   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559331   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.559492   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559641   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559748   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.559877   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.560055   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.560077   78377 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:25:15.690454   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:25:15.690486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.693309   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693654   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.693690   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693952   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.694172   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694390   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694546   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.694732   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.694940   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.694960   78377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:15.821039   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
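The SSH script above is the usual hostname provisioning: set the hostname live, persist it, and make sure /etc/hosts resolves it locally. Condensed:

	sudo hostname old-k8s-version-367072
	echo 'old-k8s-version-367072' | sudo tee /etc/hostname
	grep -q 'old-k8s-version-367072' /etc/hosts || \
	  echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts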
	I0422 18:25:15.821068   78377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:15.821096   78377 buildroot.go:174] setting up certificates
	I0422 18:25:15.821105   78377 provision.go:84] configureAuth start
	I0422 18:25:15.821113   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.821339   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.824209   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824673   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.824710   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824884   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.827439   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827725   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.827752   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827907   78377 provision.go:143] copyHostCerts
	I0422 18:25:15.827974   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:15.827987   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:15.828059   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:15.828170   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:15.828181   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:15.828209   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:15.828281   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:15.828291   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:15.828317   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:15.828411   78377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
	I0422 18:25:15.967003   78377 provision.go:177] copyRemoteCerts
	I0422 18:25:15.967056   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:15.967082   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.969759   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970152   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.970189   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970419   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.970600   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.970750   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.970903   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.058600   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:16.088368   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:25:16.119116   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:16.145380   78377 provision.go:87] duration metric: took 324.262342ms to configureAuth
	I0422 18:25:16.145416   78377 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:16.145651   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:25:16.145736   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.148776   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149221   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.149251   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149449   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.149624   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149789   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.150116   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.150295   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.150313   78377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:16.448112   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:16.448141   78377 machine.go:97] duration metric: took 1.012360153s to provisionDockerMachine
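A note on the "printf %!s(MISSING)" fragment in the SSH command above (and the later "date +%!s(MISSING).%!N(MISSING)" and the "0%!"(MISSING) values in the kubeadm config dump): the remote commands contain literal shell verbs such as %s and %N, and when the command string is echoed through a Printf-style formatter with no matching arguments, Go's fmt package renders each verb as %!verb(MISSING). The commands that actually ran on the guest used plain %s / %N. A minimal reproduction of the formatting behavior:

```go
package main

import "fmt"

func main() {
	// The remote command contains a literal shell verb ("%s"). Formatting it
	// with no arguments makes Go's fmt package substitute %!s(MISSING),
	// which is exactly what appears in the log lines above.
	cmd := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s ...")
	fmt.Println(cmd) // prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
}
```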
	I0422 18:25:16.448154   78377 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:25:16.448166   78377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:16.448188   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.448508   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:16.448541   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.451479   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.451874   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.451898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.452170   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.452373   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.452576   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.452773   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.543300   78377 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:16.549385   78377 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:16.549409   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:16.549473   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:16.549590   78377 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:16.549727   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:16.560863   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:16.585861   78377 start.go:296] duration metric: took 137.693932ms for postStartSetup
	I0422 18:25:16.585911   78377 fix.go:56] duration metric: took 20.77354305s for fixHost
	I0422 18:25:16.585931   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.588815   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589234   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.589263   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589495   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.589713   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.589877   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.590039   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.590245   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.590396   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.590406   78377 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:16.704537   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810316.682617297
	
	I0422 18:25:16.704559   78377 fix.go:216] guest clock: 1713810316.682617297
	I0422 18:25:16.704569   78377 fix.go:229] Guest: 2024-04-22 18:25:16.682617297 +0000 UTC Remote: 2024-04-22 18:25:16.585915688 +0000 UTC m=+211.981005523 (delta=96.701609ms)
	I0422 18:25:16.704592   78377 fix.go:200] guest clock delta is within tolerance: 96.701609ms
	I0422 18:25:16.704600   78377 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 20.892277591s
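The clock-fix step above parses the guest's "date +%s.%N" output, compares it with the host-side timestamp, and only resynchronizes when the difference exceeds a tolerance. A minimal sketch of that comparison using the values from this log; the parse helper and the 2-second tolerance are illustrative assumptions, not minikube's actual code or threshold:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output (e.g. "1713810316.682617297")
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1713810316.682617297")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 4, 22, 18, 25, 16, 585915688, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)

	// The tolerance value is an assumption; the log only shows that ~96ms
	// was accepted as "within tolerance".
	const tolerance = 2 * time.Second
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
```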
	I0422 18:25:16.704631   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.704920   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:16.707837   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708205   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.708230   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708427   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.708994   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709163   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709240   78377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:16.709279   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.709342   78377 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:16.709364   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.712025   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712216   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712450   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712498   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712566   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.712674   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712720   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712722   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.712857   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.712945   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.713038   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.713101   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.713240   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.713370   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.804499   78377 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:16.836596   78377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:16.993049   78377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:17.000275   78377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:17.000346   78377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:17.023327   78377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:17.023351   78377 start.go:494] detecting cgroup driver to use...
	I0422 18:25:17.023425   78377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:17.045320   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:17.061622   78377 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:17.061692   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:17.078768   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:17.094562   78377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:17.221702   78377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:17.390374   78377 docker.go:233] disabling docker service ...
	I0422 18:25:17.390449   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:17.409352   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:17.425491   78377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:17.582359   78377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:17.735691   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:17.752812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:17.777437   78377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:25:17.777495   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.789378   78377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:17.789441   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.801159   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.813702   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.825938   78377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:17.841552   78377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:17.852365   78377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:17.852455   78377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:17.870233   78377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:17.882139   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:18.021505   78377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:18.179583   78377 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:18.179677   78377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:18.185047   78377 start.go:562] Will wait 60s for crictl version
	I0422 18:25:18.185105   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:18.189079   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:18.227533   78377 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:18.227643   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.260147   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.297011   78377 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:25:15.362667   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:17.861622   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:14.831683   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:14.831706   77929 pod_ready.go:81] duration metric: took 4.007420508s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:14.831715   77929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343025   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:16.343056   77929 pod_ready.go:81] duration metric: took 1.511333532s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343070   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351244   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:17.351267   77929 pod_ready.go:81] duration metric: took 1.008189798s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351280   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:19.365025   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:18.298407   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:18.301613   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302026   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:18.302057   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302317   78377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:18.307249   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:18.321575   78377 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:18.321721   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:25:18.321767   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:18.382066   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:18.382133   78377 ssh_runner.go:195] Run: which lz4
	I0422 18:25:18.387080   78377 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:25:18.392576   78377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:25:18.392613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:25:16.728745   77400 main.go:141] libmachine: (no-preload-407991) Calling .Start
	I0422 18:25:16.728946   77400 main.go:141] libmachine: (no-preload-407991) Ensuring networks are active...
	I0422 18:25:16.729604   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network default is active
	I0422 18:25:16.729979   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network mk-no-preload-407991 is active
	I0422 18:25:16.730458   77400 main.go:141] libmachine: (no-preload-407991) Getting domain xml...
	I0422 18:25:16.731314   77400 main.go:141] libmachine: (no-preload-407991) Creating domain...
	I0422 18:25:18.079763   77400 main.go:141] libmachine: (no-preload-407991) Waiting to get IP...
	I0422 18:25:18.080862   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.081371   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.081401   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.081340   79353 retry.go:31] will retry after 226.494122ms: waiting for machine to come up
	I0422 18:25:18.309499   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.309914   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.310019   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.309900   79353 retry.go:31] will retry after 375.374338ms: waiting for machine to come up
	I0422 18:25:18.686507   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.687064   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.687093   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.687018   79353 retry.go:31] will retry after 341.714326ms: waiting for machine to come up
	I0422 18:25:19.030772   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.031261   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.031290   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.031229   79353 retry.go:31] will retry after 388.101939ms: waiting for machine to come up
	I0422 18:25:19.420994   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.421478   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.421500   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.421397   79353 retry.go:31] will retry after 732.485222ms: waiting for machine to come up
	I0422 18:25:20.155887   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:20.156717   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:20.156750   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:20.156665   79353 retry.go:31] will retry after 950.207106ms: waiting for machine to come up
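The repeated "will retry after ...: waiting for machine to come up" lines are a poll loop with a growing, randomized backoff around the driver's IP lookup. A minimal sketch of that pattern; the probe function, backoff constants, and placeholder address are illustrative, not libmachine's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout expires,
// sleeping a randomized, growing interval between attempts -- the pattern
// behind the "will retry after ..." lines in the log.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		wait := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // placeholder address for the example
	}, 30*time.Second)
	fmt.Println(ip, err)
}
```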
	I0422 18:25:19.878966   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.364111   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:21.859384   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.362519   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.362552   77929 pod_ready.go:81] duration metric: took 5.011264858s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.362566   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371087   77929 pod_ready.go:92] pod "kube-proxy-4xvx2" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.371112   77929 pod_ready.go:81] duration metric: took 8.534129ms for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371142   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376156   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.376183   77929 pod_ready.go:81] duration metric: took 5.03143ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376196   77929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:24.385435   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
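The interleaved pod_ready.go lines are per-pod readiness polls: each waits up to 4m0s for the pod's Ready condition to become True (the metrics-server pods above keep reporting Ready=False). A minimal client-go sketch of such a check, assuming a client-go clientset; the function name and poll interval are assumptions, not minikube's actual helper:

```go
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls a pod until its Ready condition is True or the timeout
// expires, mirroring the "waiting up to 4m0s for pod ... to be Ready" lines.
func WaitPodReady(cs kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```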
	I0422 18:25:20.319994   78377 crio.go:462] duration metric: took 1.932984536s to copy over tarball
	I0422 18:25:20.320076   78377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:25:23.622384   78377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.30227916s)
	I0422 18:25:23.622411   78377 crio.go:469] duration metric: took 3.302385661s to extract the tarball
	I0422 18:25:23.622419   78377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:23.678794   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:23.720105   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:23.720138   78377 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:23.720191   78377 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.720221   78377 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.720264   78377 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.720285   78377 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:25:23.720310   78377 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.720396   78377 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.720464   78377 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.720244   78377 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721865   78377 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.721895   78377 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.721911   78377 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721925   78377 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.721986   78377 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.722013   78377 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.722040   78377 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.722415   78377 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:25:23.947080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:25:23.956532   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.969401   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.975080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.977902   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.987657   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.091349   78377 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:25:24.091415   78377 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:25:24.091473   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091508   78377 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:25:24.091564   78377 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.091612   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091773   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.112708   78377 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:25:24.112758   78377 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.112807   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.156371   78377 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:25:24.156420   78377 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.156476   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209420   78377 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:25:24.209468   78377 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.209467   78377 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:25:24.209504   78377 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.209519   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209533   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209580   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:25:24.209613   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.209666   78377 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:25:24.209697   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.209700   78377 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.209721   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.209750   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.319159   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:25:24.319265   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:25:24.319294   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:25:24.319374   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:25:24.319453   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.319532   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.319575   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.406665   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:25:24.406699   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:25:24.406776   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:25:24.581672   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
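The cache_images flow above decides, per image, whether a transfer is needed: it asks the runtime for the image ID via "sudo podman image inspect --format {{.Id}}", and if the image is missing or its ID differs from the expected hash it removes it with crictl and falls back to the on-disk cache (which is absent here, hence the "Unable to load cached images" warning further down). A minimal sketch of that existence/ID check; the function name is illustrative and the expected hash is taken from the pause:3.2 line above:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageNeedsTransfer reports whether image is absent from the container
// runtime or present under a different ID than expected -- the check behind
// the "needs transfer: ... does not exist at hash ..." log lines.
func imageNeedsTransfer(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != expectedID
}

func main() {
	need := imageNeedsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println("needs transfer:", need)
}
```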
	I0422 18:25:21.108444   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:21.109056   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:21.109082   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:21.109004   79353 retry.go:31] will retry after 958.250136ms: waiting for machine to come up
	I0422 18:25:22.069541   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:22.070120   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:22.070144   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:22.070036   79353 retry.go:31] will retry after 989.607679ms: waiting for machine to come up
	I0422 18:25:23.061351   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:23.061877   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:23.061908   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:23.061823   79353 retry.go:31] will retry after 1.451989455s: waiting for machine to come up
	I0422 18:25:24.515233   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:24.515730   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:24.515755   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:24.515686   79353 retry.go:31] will retry after 2.303903602s: waiting for machine to come up
	I0422 18:25:24.365508   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.861066   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.389132   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:28.883625   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:24.724445   78377 cache_images.go:92] duration metric: took 1.004285991s to LoadCachedImages
	W0422 18:25:24.894312   78377 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0422 18:25:24.894361   78377 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:25:24.894488   78377 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:24.894582   78377 ssh_runner.go:195] Run: crio config
	I0422 18:25:24.951231   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:25:24.951266   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:24.951282   78377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:24.951305   78377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:25:24.951495   78377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:24.951570   78377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:25:24.964466   78377 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:24.964547   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:24.976092   78377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:25:24.995716   78377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:25.014159   78377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0422 18:25:25.036255   78377 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:25.040649   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:25.055323   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:25.186492   78377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:25.208819   78377 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:25:25.208862   78377 certs.go:194] generating shared ca certs ...
	I0422 18:25:25.208882   78377 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.209089   78377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:25.209144   78377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:25.209155   78377 certs.go:256] generating profile certs ...
	I0422 18:25:25.209307   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:25:25.209376   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:25:25.209438   78377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:25:25.209584   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:25.209623   78377 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:25.209632   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:25.209664   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:25.209701   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:25.209738   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:25.209791   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:25.210613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:25.262071   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:25.298556   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:25.331614   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:25.368285   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:25:25.403290   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:25.441081   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:25.487498   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:25:25.522482   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:25.549945   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:25.578991   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:25.608935   78377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:25.629179   78377 ssh_runner.go:195] Run: openssl version
	I0422 18:25:25.636149   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:25.648693   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653465   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653534   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.659701   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:25.671984   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:25.684361   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689344   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689410   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.695648   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:25.708266   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:25.721991   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726808   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726872   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.732974   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:25.749380   78377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:25.754517   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:25.761538   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:25.768472   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:25.775728   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:25.782337   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:25.788885   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
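The block of "openssl x509 -noout -in <cert> -checkend 86400" runs above verifies that each control-plane certificate remains valid for at least the next 24 hours (openssl exits non-zero if the certificate would expire within the given number of seconds). An equivalent check written as a minimal Go sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of the `openssl x509 -checkend <seconds>` calls above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, "err:", err)
	}
}
```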
	I0422 18:25:25.795677   78377 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:25.795771   78377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:25.795839   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.837381   78377 cri.go:89] found id: ""
	I0422 18:25:25.837437   78377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:25.848554   78377 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:25.848574   78377 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:25.848579   78377 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:25.848625   78377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:25.860204   78377 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:25.861212   78377 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:25:25.861884   78377 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-367072" cluster setting kubeconfig missing "old-k8s-version-367072" context setting]
	I0422 18:25:25.862851   78377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.864562   78377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:25.875151   78377 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.149
	I0422 18:25:25.875182   78377 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:25.875193   78377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:25.875255   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.915872   78377 cri.go:89] found id: ""
	I0422 18:25:25.915982   78377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:25.934776   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:25.946299   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:25.946326   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:25.946378   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:25:25.957495   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:25.957578   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:25.968843   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:25:25.981829   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:25.981909   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:25.995318   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.009567   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:26.009630   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.024306   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:25:26.036008   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:26.036075   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:26.046594   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:26.057056   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:26.207676   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.085460   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.324735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.431848   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.541157   78377 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:27.541254   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.042131   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.542270   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.041887   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.542069   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:26.821539   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:26.822006   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:26.822033   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:26.821950   79353 retry.go:31] will retry after 1.870697225s: waiting for machine to come up
	I0422 18:25:28.695072   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:28.695420   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:28.695466   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:28.695386   79353 retry.go:31] will retry after 2.327485176s: waiting for machine to come up
	I0422 18:25:28.861976   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:31.361339   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.883801   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:33.389422   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.041985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.541653   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.041304   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.542040   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.042024   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.541622   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.041428   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.541675   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.041841   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.541705   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.024382   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:31.024817   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:31.024845   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:31.024786   79353 retry.go:31] will retry after 2.767538103s: waiting for machine to come up
	I0422 18:25:33.794390   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:33.794834   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:33.794872   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:33.794808   79353 retry.go:31] will retry after 5.661373675s: waiting for machine to come up
	I0422 18:25:33.860276   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.861770   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:38.361316   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.883098   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:37.883749   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.041898   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.541499   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.041443   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.542150   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.042296   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.542002   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.041367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.541518   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.041471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.542025   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.457864   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458407   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has current primary IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458447   77400 main.go:141] libmachine: (no-preload-407991) Found IP for machine: 192.168.39.164
	I0422 18:25:39.458492   77400 main.go:141] libmachine: (no-preload-407991) Reserving static IP address...
	I0422 18:25:39.458954   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.458980   77400 main.go:141] libmachine: (no-preload-407991) DBG | skip adding static IP to network mk-no-preload-407991 - found existing host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"}
	I0422 18:25:39.458992   77400 main.go:141] libmachine: (no-preload-407991) Reserved static IP address: 192.168.39.164
	I0422 18:25:39.459012   77400 main.go:141] libmachine: (no-preload-407991) Waiting for SSH to be available...
	I0422 18:25:39.459027   77400 main.go:141] libmachine: (no-preload-407991) DBG | Getting to WaitForSSH function...
	I0422 18:25:39.461404   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461715   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.461746   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461875   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH client type: external
	I0422 18:25:39.461906   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa (-rw-------)
	I0422 18:25:39.461956   77400 main.go:141] libmachine: (no-preload-407991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:39.461974   77400 main.go:141] libmachine: (no-preload-407991) DBG | About to run SSH command:
	I0422 18:25:39.461992   77400 main.go:141] libmachine: (no-preload-407991) DBG | exit 0
	I0422 18:25:39.591446   77400 main.go:141] libmachine: (no-preload-407991) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:39.591795   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetConfigRaw
	I0422 18:25:39.592473   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.594928   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595379   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.595414   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595632   77400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/config.json ...
	I0422 18:25:39.595890   77400 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:39.595914   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:39.596103   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.598532   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.598899   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.598929   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.599071   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.599270   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599450   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599592   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.599728   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.599927   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.599942   77400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:39.712043   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:39.712081   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712336   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:25:39.712363   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712548   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.715474   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.715936   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.715960   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.716089   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.716265   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716396   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716530   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.716656   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.716860   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.716874   77400 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407991 && echo "no-preload-407991" | sudo tee /etc/hostname
	I0422 18:25:39.845921   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407991
	
	I0422 18:25:39.845959   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.848790   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849093   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.849121   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849288   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.849495   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849638   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849817   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.850014   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.850183   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.850200   77400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407991/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:39.977389   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:39.977427   77400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:39.977447   77400 buildroot.go:174] setting up certificates
	I0422 18:25:39.977456   77400 provision.go:84] configureAuth start
	I0422 18:25:39.977468   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.977754   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.980800   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981266   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.981305   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981458   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.984031   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984478   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.984510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984654   77400 provision.go:143] copyHostCerts
	I0422 18:25:39.984713   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:39.984725   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:39.984788   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:39.984907   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:39.984918   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:39.984952   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:39.985038   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:39.985048   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:39.985076   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:39.985158   77400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.no-preload-407991 san=[127.0.0.1 192.168.39.164 localhost minikube no-preload-407991]
	I0422 18:25:40.224235   77400 provision.go:177] copyRemoteCerts
	I0422 18:25:40.224306   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:40.224352   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.227355   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.227814   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.227842   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.228035   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.228232   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.228392   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.228560   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.318916   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:40.346168   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:40.371490   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:25:40.396866   77400 provision.go:87] duration metric: took 419.381117ms to configureAuth
	I0422 18:25:40.396899   77400 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:40.397067   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:25:40.397130   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.399642   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400060   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.400095   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400269   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.400466   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400652   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400832   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.401018   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.401176   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.401191   77400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:40.698107   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:40.698140   77400 machine.go:97] duration metric: took 1.102235221s to provisionDockerMachine
	I0422 18:25:40.698153   77400 start.go:293] postStartSetup for "no-preload-407991" (driver="kvm2")
	I0422 18:25:40.698171   77400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:40.698187   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.698497   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:40.698532   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.701545   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.701933   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.701964   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.702070   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.702295   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.702492   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.702727   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.800538   77400 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:40.805027   77400 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:40.805060   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:40.805133   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:40.805216   77400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:40.805304   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:40.816872   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:40.843857   77400 start.go:296] duration metric: took 145.69044ms for postStartSetup
	I0422 18:25:40.843896   77400 fix.go:56] duration metric: took 24.13914409s for fixHost
	I0422 18:25:40.843914   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.846770   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847148   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.847184   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847391   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.847605   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847778   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847966   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.848199   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.848382   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.848396   77400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:40.964440   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810340.939149386
	
	I0422 18:25:40.964473   77400 fix.go:216] guest clock: 1713810340.939149386
	I0422 18:25:40.964483   77400 fix.go:229] Guest: 2024-04-22 18:25:40.939149386 +0000 UTC Remote: 2024-04-22 18:25:40.843899302 +0000 UTC m=+360.205454093 (delta=95.250084ms)
	I0422 18:25:40.964508   77400 fix.go:200] guest clock delta is within tolerance: 95.250084ms
	I0422 18:25:40.964513   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 24.259798286s
	I0422 18:25:40.964535   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.964813   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:40.967510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.967906   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.967932   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.968087   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968610   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968782   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968866   77400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:40.968910   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.969047   77400 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:40.969074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.971818   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972039   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972190   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972203   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972394   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972565   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972580   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972594   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972733   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972791   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.972875   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972948   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.973062   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.973206   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:41.092004   77400 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:41.098574   77400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:41.242800   77400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:41.250454   77400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:41.250521   77400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:41.267380   77400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:41.267408   77400 start.go:494] detecting cgroup driver to use...
	I0422 18:25:41.267478   77400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:41.284742   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:41.299527   77400 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:41.299596   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:41.314189   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:41.329444   77400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:41.456719   77400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:41.628305   77400 docker.go:233] disabling docker service ...
	I0422 18:25:41.628376   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:41.643226   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:41.657578   77400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:41.780449   77400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:41.898823   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:41.913578   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:41.933621   77400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:25:41.933679   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.944309   77400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:41.944382   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.955308   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.966445   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.977509   77400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:41.989479   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.001915   77400 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.020554   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.033225   77400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:42.044177   77400 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:42.044231   77400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:42.060403   77400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:42.071760   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:42.213747   77400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:42.361818   77400 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:42.361911   77400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:42.367211   77400 start.go:562] Will wait 60s for crictl version
	I0422 18:25:42.367265   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.371042   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:42.408686   77400 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:42.408773   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.438447   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.469117   77400 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:25:40.862849   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.361826   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:39.884361   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:41.885199   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.885865   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:40.041777   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.541411   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.041834   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.542328   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.042211   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.542008   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.041844   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.542121   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.041564   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.541344   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.470665   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:42.473467   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.473845   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:42.473871   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.474121   77400 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:42.478401   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:42.491034   77400 kubeadm.go:877] updating cluster {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:42.491163   77400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:25:42.491203   77400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:42.530418   77400 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:25:42.530443   77400 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.530585   77400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.530641   77400 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0422 18:25:42.530601   77400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.530609   77400 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.530622   77400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.530626   77400 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532108   77400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.532136   77400 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0422 18:25:42.532111   77400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.532113   77400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.532175   77400 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532197   77400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.532223   77400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.532506   77400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.735366   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.750777   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0422 18:25:42.758260   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.759633   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.763447   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.765416   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.803799   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.832904   77400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0422 18:25:42.832959   77400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.833021   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981471   77400 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0422 18:25:42.981528   77400 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.981553   77400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0422 18:25:42.981584   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981592   77400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.981635   77400 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0422 18:25:42.981663   77400 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.981687   77400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0422 18:25:42.981699   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981642   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981716   77400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.981770   77400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0422 18:25:42.981776   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981788   77400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.981820   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981846   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:43.021364   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0422 18:25:43.021416   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:43.021455   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.021460   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:43.021529   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:43.021534   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:43.021585   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:43.130300   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0422 18:25:43.130373   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0422 18:25:43.130408   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:43.130425   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0422 18:25:43.130455   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:43.130514   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:43.134769   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0422 18:25:43.134785   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0422 18:25:43.134797   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134839   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134853   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:43.134882   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0422 18:25:43.134959   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:43.142273   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0422 18:25:43.142486   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0422 18:25:43.142837   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0422 18:25:43.840108   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210614   77400 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.075740127s)
	I0422 18:25:45.210650   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0422 18:25:45.210655   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.075789371s)
	I0422 18:25:45.210676   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0422 18:25:45.210693   77400 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.075715404s)
	I0422 18:25:45.210699   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210706   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0422 18:25:45.210748   77400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.370610047s)
	I0422 18:25:45.210785   77400 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0422 18:25:45.210750   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210842   77400 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210969   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:45.363082   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:47.861802   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:46.383938   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:48.385209   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:45.042273   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.541576   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.041447   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.541920   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.042364   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.541813   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.042362   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.541320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.041845   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.542204   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.203063   77400 ssh_runner.go:235] Completed: which crictl: (2.992066474s)
	I0422 18:25:48.203106   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.992228832s)
	I0422 18:25:48.203143   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0422 18:25:48.203159   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:48.203171   77400 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:48.203210   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:49.863963   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:52.370507   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.883608   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:53.386229   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.042263   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.541538   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.042055   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.041479   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.542313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.041554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.541500   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.042153   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.541953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.419429   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.216195193s)
	I0422 18:25:52.419462   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0422 18:25:52.419474   77400 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.216288559s)
	I0422 18:25:52.419488   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419513   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0422 18:25:52.419537   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419581   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:52.424638   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0422 18:25:53.873720   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.454157304s)
	I0422 18:25:53.873750   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0422 18:25:53.873780   77400 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:53.873825   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:54.860810   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:56.864272   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.388103   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:57.887970   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.041393   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.541470   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.042188   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.541734   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.042041   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.541540   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.041682   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.542178   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.042125   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.542154   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.955181   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.081308071s)
	I0422 18:25:55.955210   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0422 18:25:55.955236   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:55.955300   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:58.218734   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.263410883s)
	I0422 18:25:58.218762   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0422 18:25:58.218792   77400 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:58.218843   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:59.071398   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0422 18:25:59.071443   77400 cache_images.go:123] Successfully loaded all cached images
	I0422 18:25:59.071450   77400 cache_images.go:92] duration metric: took 16.54097573s to LoadCachedImages
	I0422 18:25:59.071463   77400 kubeadm.go:928] updating node { 192.168.39.164 8443 v1.30.0 crio true true} ...
	I0422 18:25:59.071610   77400 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-407991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:59.071698   77400 ssh_runner.go:195] Run: crio config
	I0422 18:25:59.125757   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:25:59.125783   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:59.125800   77400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:59.125832   77400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-407991 NodeName:no-preload-407991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:59.126001   77400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-407991"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:59.126073   77400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:59.137254   77400 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:59.137320   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:59.146983   77400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0422 18:25:59.165207   77400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:59.182898   77400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0422 18:25:59.201735   77400 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:59.206108   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:59.219642   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:59.336565   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:59.356844   77400 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991 for IP: 192.168.39.164
	I0422 18:25:59.356873   77400 certs.go:194] generating shared ca certs ...
	I0422 18:25:59.356893   77400 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:59.357058   77400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:59.357121   77400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:59.357133   77400 certs.go:256] generating profile certs ...
	I0422 18:25:59.357209   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/client.key
	I0422 18:25:59.357329   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key.6aa1268b
	I0422 18:25:59.357413   77400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key
	I0422 18:25:59.357574   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:59.357616   77400 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:59.357631   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:59.357672   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:59.357707   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:59.357745   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:59.357823   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:59.358765   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:59.395982   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:59.430445   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:59.465415   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:59.502678   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 18:25:59.538225   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:25:59.570635   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:59.596096   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:59.622051   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:59.647372   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:59.673650   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:59.699515   77400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:59.717253   77400 ssh_runner.go:195] Run: openssl version
	I0422 18:25:59.723704   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:59.735265   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740264   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740319   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.746445   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:59.757879   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:59.769243   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774505   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774562   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.780572   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:59.793472   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:59.805187   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810148   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810191   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.816350   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:59.828208   77400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:59.832799   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:59.838952   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:59.845145   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:59.851309   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:59.857643   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:59.864892   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:25:59.873625   77400 kubeadm.go:391] StartCluster: {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:59.873749   77400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:59.873826   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.913578   77400 cri.go:89] found id: ""
	I0422 18:25:59.913656   77400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:59.925105   77400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:59.925131   77400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:59.925138   77400 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:59.925192   77400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:59.935942   77400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:59.937363   77400 kubeconfig.go:125] found "no-preload-407991" server: "https://192.168.39.164:8443"
	I0422 18:25:59.939672   77400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:59.949774   77400 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.164
	I0422 18:25:59.949810   77400 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:59.949841   77400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:59.949896   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.989385   77400 cri.go:89] found id: ""
	I0422 18:25:59.989443   77400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:26:00.005985   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:26:00.016873   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:26:00.016897   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:26:00.016953   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:26:00.027119   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:26:00.027205   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:26:00.038360   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:26:00.048176   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:26:00.048246   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:26:00.058861   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.068955   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:26:00.069018   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.079147   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:26:00.089400   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:26:00.089477   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:26:00.100245   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:26:00.111040   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:00.224436   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:59.362215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:01.860196   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.388433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:02.883211   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.042114   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.542138   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.042285   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.542226   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.041310   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.541432   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.041406   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.542306   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.042010   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.541508   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.838456   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.057201   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.143346   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.294896   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:26:01.295031   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.795945   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.296085   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.324434   77400 api_server.go:72] duration metric: took 1.029539423s to wait for apiserver process to appear ...
	I0422 18:26:02.324467   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:26:02.324490   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.784948   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:26:04.784984   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:26:04.784997   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.844019   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.844064   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:04.844084   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.848805   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.848838   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.325458   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.332351   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.332410   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.824785   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.830293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.830318   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:06.325380   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:06.332804   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:26:06.344083   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:26:06.344110   77400 api_server.go:131] duration metric: took 4.019636154s to wait for apiserver health ...
	I0422 18:26:06.344118   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:26:06.344123   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:26:06.345875   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:26:03.863020   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:06.360428   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:04.884648   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:07.382356   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:09.388391   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:05.041961   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.541723   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.041954   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.541963   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.041378   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.541879   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.041942   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.541357   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.041425   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.541474   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.347812   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:26:06.361087   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:26:06.385654   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:26:06.398331   77400 system_pods.go:59] 8 kube-system pods found
	I0422 18:26:06.398372   77400 system_pods.go:61] "coredns-7db6d8ff4d-2p2sr" [3f42ce46-e76d-4bc8-9dd5-463a08948e4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:26:06.398384   77400 system_pods.go:61] "etcd-no-preload-407991" [96ae7feb-802f-44a8-81fc-5ea5de12e73b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:26:06.398396   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [28010e33-49a1-4c6b-90f9-939ede3ed97e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:26:06.398404   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [1e7db029-2196-499f-bc88-d780d065f80c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:26:06.398415   77400 system_pods.go:61] "kube-proxy-767q4" [1c6d01b0-caf0-4d52-8da8-caad7b158012] Running
	I0422 18:26:06.398426   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [3ef8d145-d90e-455d-98fe-de9e6080a178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:26:06.398433   77400 system_pods.go:61] "metrics-server-569cc877fc-jmjhm" [d831b01b-af2e-4c7f-944c-e768d724ee5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:26:06.398439   77400 system_pods.go:61] "storage-provisioner" [db8196df-a394-4e10-9db7-c10414833af3] Running
	I0422 18:26:06.398447   77400 system_pods.go:74] duration metric: took 12.770066ms to wait for pod list to return data ...
	I0422 18:26:06.398455   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:26:06.402125   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:26:06.402158   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:26:06.402170   77400 node_conditions.go:105] duration metric: took 3.709194ms to run NodePressure ...
	I0422 18:26:06.402195   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:06.676133   77400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680247   77400 kubeadm.go:733] kubelet initialised
	I0422 18:26:06.680269   77400 kubeadm.go:734] duration metric: took 4.114413ms waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680276   77400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:26:06.687275   77400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.693967   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.693986   77400 pod_ready.go:81] duration metric: took 6.687466ms for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.694004   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.694012   77400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.698539   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698562   77400 pod_ready.go:81] duration metric: took 4.539271ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.698571   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698578   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.703382   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703407   77400 pod_ready.go:81] duration metric: took 4.822601ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.703418   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703425   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.789413   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789449   77400 pod_ready.go:81] duration metric: took 86.014056ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.789459   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789465   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189544   77400 pod_ready.go:92] pod "kube-proxy-767q4" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:07.189572   77400 pod_ready.go:81] duration metric: took 400.096716ms for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189585   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:09.201757   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:08.861714   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.359820   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.362303   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.883726   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:14.382966   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:10.041640   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.541360   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.042045   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.542018   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.541590   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.042320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.542036   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.041303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.541575   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.697196   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.697458   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.861378   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:17.861808   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:16.385523   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:18.883000   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.042300   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.542084   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.541867   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.041409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.542019   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.042027   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.042237   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.541613   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.697079   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:15.697104   77400 pod_ready.go:81] duration metric: took 8.507511233s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:15.697116   77400 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:17.704095   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.204276   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.360946   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:22.861202   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.883107   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:23.383119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.042039   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.541667   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.041765   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.542383   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.042213   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.541317   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.042164   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.541367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.042303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.541416   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.204697   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.703926   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.861797   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.361089   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.384161   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.386172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.042321   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.541554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.041583   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.542179   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.041877   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.541400   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:27.541473   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:27.585381   78377 cri.go:89] found id: ""
	I0422 18:26:27.585411   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.585424   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:27.585431   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:27.585503   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:27.622536   78377 cri.go:89] found id: ""
	I0422 18:26:27.622568   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.622578   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:27.622584   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:27.622645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:27.665233   78377 cri.go:89] found id: ""
	I0422 18:26:27.665264   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.665272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:27.665278   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:27.665356   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:27.703600   78377 cri.go:89] found id: ""
	I0422 18:26:27.703629   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.703640   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:27.703647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:27.703706   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:27.741412   78377 cri.go:89] found id: ""
	I0422 18:26:27.741441   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.741451   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:27.741459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:27.741520   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:27.783184   78377 cri.go:89] found id: ""
	I0422 18:26:27.783211   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.783218   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:27.783224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:27.783290   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:27.825404   78377 cri.go:89] found id: ""
	I0422 18:26:27.825433   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.825443   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:27.825450   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:27.825513   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:27.862052   78377 cri.go:89] found id: ""
	I0422 18:26:27.862076   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.862086   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:27.862096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:27.862109   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:27.914533   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:27.914564   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:27.929474   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:27.929502   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:28.054566   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:28.054595   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:28.054612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:28.119416   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:28.119451   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:27.204128   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.207057   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.364913   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.883085   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.883536   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.883927   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:30.667642   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:30.680870   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:30.680930   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:30.719832   78377 cri.go:89] found id: ""
	I0422 18:26:30.719863   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.719874   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:30.719881   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:30.719940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:30.756168   78377 cri.go:89] found id: ""
	I0422 18:26:30.756195   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.756206   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:30.756213   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:30.756267   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:30.792940   78377 cri.go:89] found id: ""
	I0422 18:26:30.792963   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.792971   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:30.792976   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:30.793021   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:30.827452   78377 cri.go:89] found id: ""
	I0422 18:26:30.827480   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.827490   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:30.827497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:30.827563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:30.868058   78377 cri.go:89] found id: ""
	I0422 18:26:30.868088   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.868099   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:30.868107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:30.868170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:30.908639   78377 cri.go:89] found id: ""
	I0422 18:26:30.908672   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.908680   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:30.908686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:30.908735   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:30.959048   78377 cri.go:89] found id: ""
	I0422 18:26:30.959073   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.959080   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:30.959085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:30.959153   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:30.998779   78377 cri.go:89] found id: ""
	I0422 18:26:30.998809   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.998821   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:30.998856   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:30.998875   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:31.053763   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:31.053804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:31.069522   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:31.069558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:31.147512   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:31.147541   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:31.147556   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:31.222713   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:31.222752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:33.765573   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:33.781038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:33.781116   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:33.822148   78377 cri.go:89] found id: ""
	I0422 18:26:33.822175   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.822182   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:33.822187   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:33.822282   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:33.862524   78377 cri.go:89] found id: ""
	I0422 18:26:33.862553   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.862559   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:33.862565   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:33.862626   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:33.905952   78377 cri.go:89] found id: ""
	I0422 18:26:33.905980   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.905991   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:33.905999   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:33.906059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:33.943184   78377 cri.go:89] found id: ""
	I0422 18:26:33.943212   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.943220   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:33.943227   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:33.943285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:33.981677   78377 cri.go:89] found id: ""
	I0422 18:26:33.981712   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.981723   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:33.981731   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:33.981790   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:34.025999   78377 cri.go:89] found id: ""
	I0422 18:26:34.026026   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.026035   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:34.026042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:34.026102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:34.062940   78377 cri.go:89] found id: ""
	I0422 18:26:34.062967   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.062977   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:34.062985   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:34.063044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:34.103112   78377 cri.go:89] found id: ""
	I0422 18:26:34.103153   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.103164   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:34.103175   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:34.103189   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:34.156907   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:34.156944   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:34.171581   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:34.171608   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:34.252755   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:34.252784   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:34.252799   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:34.334118   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:34.334155   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:31.704123   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:34.206443   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.863261   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.360525   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.361132   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.385507   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.882649   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.882905   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:36.897949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:36.898026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:36.934776   78377 cri.go:89] found id: ""
	I0422 18:26:36.934801   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.934808   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:36.934814   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:36.934870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:36.974432   78377 cri.go:89] found id: ""
	I0422 18:26:36.974459   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.974467   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:36.974472   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:36.974519   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:37.011460   78377 cri.go:89] found id: ""
	I0422 18:26:37.011485   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.011496   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:37.011503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:37.011583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:37.056559   78377 cri.go:89] found id: ""
	I0422 18:26:37.056592   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.056604   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:37.056611   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:37.056670   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:37.095328   78377 cri.go:89] found id: ""
	I0422 18:26:37.095359   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.095371   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:37.095379   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:37.095460   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:37.132056   78377 cri.go:89] found id: ""
	I0422 18:26:37.132084   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.132095   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:37.132101   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:37.132162   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:37.168957   78377 cri.go:89] found id: ""
	I0422 18:26:37.168987   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.168998   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:37.169005   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:37.169072   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:37.207501   78377 cri.go:89] found id: ""
	I0422 18:26:37.207533   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.207544   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:37.207553   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:37.207567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:37.289851   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:37.289890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:37.351454   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:37.351481   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:37.409901   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:37.409938   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:37.425203   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:37.425234   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:37.508518   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:36.704473   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:39.204839   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.863837   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.362000   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.887004   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.384351   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.008934   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:40.023037   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:40.023096   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:40.066750   78377 cri.go:89] found id: ""
	I0422 18:26:40.066791   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.066811   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:40.066818   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:40.066889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:40.106562   78377 cri.go:89] found id: ""
	I0422 18:26:40.106584   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.106592   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:40.106598   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:40.106644   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:40.145265   78377 cri.go:89] found id: ""
	I0422 18:26:40.145300   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.145311   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:40.145319   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:40.145385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:40.182667   78377 cri.go:89] found id: ""
	I0422 18:26:40.182696   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.182707   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:40.182714   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:40.182772   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:40.227084   78377 cri.go:89] found id: ""
	I0422 18:26:40.227114   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.227139   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:40.227148   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:40.227203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:40.264298   78377 cri.go:89] found id: ""
	I0422 18:26:40.264326   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.264333   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:40.264339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:40.264404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:40.302071   78377 cri.go:89] found id: ""
	I0422 18:26:40.302103   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.302113   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:40.302121   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:40.302191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:40.340031   78377 cri.go:89] found id: ""
	I0422 18:26:40.340072   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.340083   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:40.340094   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:40.340108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:40.386371   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:40.386402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:40.438805   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:40.438884   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:40.455199   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:40.455240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:40.535984   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:40.536006   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:40.536024   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.125605   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:43.139961   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:43.140033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:43.176588   78377 cri.go:89] found id: ""
	I0422 18:26:43.176615   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.176625   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:43.176632   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:43.176695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:43.215868   78377 cri.go:89] found id: ""
	I0422 18:26:43.215900   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.215921   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:43.215929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:43.215991   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:43.253562   78377 cri.go:89] found id: ""
	I0422 18:26:43.253592   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.253603   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:43.253608   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:43.253652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:43.289305   78377 cri.go:89] found id: ""
	I0422 18:26:43.289335   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.289346   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:43.289353   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:43.289417   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:43.329241   78377 cri.go:89] found id: ""
	I0422 18:26:43.329286   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.329295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:43.329300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:43.329351   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:43.369682   78377 cri.go:89] found id: ""
	I0422 18:26:43.369700   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.369707   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:43.369713   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:43.369764   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:43.411788   78377 cri.go:89] found id: ""
	I0422 18:26:43.411812   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.411821   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:43.411829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:43.411911   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:43.447351   78377 cri.go:89] found id: ""
	I0422 18:26:43.447387   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.447398   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:43.447407   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:43.447418   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:43.520087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:43.520114   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:43.520125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.602199   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:43.602233   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:43.645723   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:43.645748   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:43.702769   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:43.702804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:41.704418   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.704878   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.362073   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.860279   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.385285   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.882420   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:46.229598   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:46.243348   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:46.243418   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:46.282470   78377 cri.go:89] found id: ""
	I0422 18:26:46.282500   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.282512   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:46.282519   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:46.282584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:46.327718   78377 cri.go:89] found id: ""
	I0422 18:26:46.327747   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.327755   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:46.327761   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:46.327829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:46.369785   78377 cri.go:89] found id: ""
	I0422 18:26:46.369807   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.369814   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:46.369820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:46.369867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:46.408132   78377 cri.go:89] found id: ""
	I0422 18:26:46.408161   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.408170   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:46.408175   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:46.408236   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:46.450058   78377 cri.go:89] found id: ""
	I0422 18:26:46.450084   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.450091   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:46.450096   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:46.450144   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:46.493747   78377 cri.go:89] found id: ""
	I0422 18:26:46.493776   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.493788   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:46.493794   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:46.493847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:46.529054   78377 cri.go:89] found id: ""
	I0422 18:26:46.529090   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.529102   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:46.529122   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:46.529186   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:46.566699   78377 cri.go:89] found id: ""
	I0422 18:26:46.566724   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.566732   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:46.566740   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:46.566752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.582569   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:46.582606   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:46.652188   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:46.652212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:46.652224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:46.732276   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:46.732316   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:46.789834   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:46.789862   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.343229   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:49.357513   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:49.357571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:49.396741   78377 cri.go:89] found id: ""
	I0422 18:26:49.396774   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.396785   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:49.396792   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:49.396862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:49.432048   78377 cri.go:89] found id: ""
	I0422 18:26:49.432081   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.432093   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:49.432100   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:49.432159   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:49.482104   78377 cri.go:89] found id: ""
	I0422 18:26:49.482130   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.482138   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:49.482145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:49.482202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:49.526782   78377 cri.go:89] found id: ""
	I0422 18:26:49.526811   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.526823   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:49.526830   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:49.526884   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:49.575436   78377 cri.go:89] found id: ""
	I0422 18:26:49.575471   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.575482   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:49.575490   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:49.575553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:49.628839   78377 cri.go:89] found id: ""
	I0422 18:26:49.628862   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.628870   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:49.628875   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:49.628940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:45.706474   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:48.205681   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.860748   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.360586   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.884553   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:51.885527   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.387502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.670046   78377 cri.go:89] found id: ""
	I0422 18:26:49.670074   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.670085   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:49.670091   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:49.670158   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:49.707083   78377 cri.go:89] found id: ""
	I0422 18:26:49.707109   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.707119   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:49.707144   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:49.707157   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.762794   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:49.762838   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:49.777771   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:49.777801   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:49.853426   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:49.853448   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:49.853463   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:49.934621   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:49.934659   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:52.481352   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:52.495956   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:52.496025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:52.539518   78377 cri.go:89] found id: ""
	I0422 18:26:52.539549   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.539559   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:52.539566   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:52.539627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:52.580604   78377 cri.go:89] found id: ""
	I0422 18:26:52.580632   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.580641   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:52.580646   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:52.580700   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:52.622746   78377 cri.go:89] found id: ""
	I0422 18:26:52.622775   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.622783   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:52.622795   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:52.622858   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:52.659557   78377 cri.go:89] found id: ""
	I0422 18:26:52.659579   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.659587   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:52.659592   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:52.659661   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:52.697653   78377 cri.go:89] found id: ""
	I0422 18:26:52.697678   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.697685   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:52.697691   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:52.697745   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:52.735505   78377 cri.go:89] found id: ""
	I0422 18:26:52.735536   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.735546   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:52.735554   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:52.735616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:52.774216   78377 cri.go:89] found id: ""
	I0422 18:26:52.774239   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.774247   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:52.774261   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:52.774318   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:52.812909   78377 cri.go:89] found id: ""
	I0422 18:26:52.812934   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.812941   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:52.812949   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:52.812981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:52.897636   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:52.897663   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:52.897679   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:52.985013   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:52.985046   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:53.031395   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:53.031427   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:53.088446   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:53.088480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:50.703624   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.704794   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.204187   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.861314   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:57.360430   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:56.882974   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:58.884770   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.603647   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:55.617977   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:55.618039   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:55.663769   78377 cri.go:89] found id: ""
	I0422 18:26:55.663797   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.663815   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:55.663822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:55.663925   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:55.701287   78377 cri.go:89] found id: ""
	I0422 18:26:55.701326   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.701338   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:55.701346   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:55.701435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:55.740041   78377 cri.go:89] found id: ""
	I0422 18:26:55.740067   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.740078   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:55.740107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:55.740163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:55.779093   78377 cri.go:89] found id: ""
	I0422 18:26:55.779143   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.779154   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:55.779170   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:55.779219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:55.822107   78377 cri.go:89] found id: ""
	I0422 18:26:55.822133   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.822141   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:55.822146   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:55.822195   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:55.862157   78377 cri.go:89] found id: ""
	I0422 18:26:55.862204   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.862215   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:55.862224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:55.862295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:55.902557   78377 cri.go:89] found id: ""
	I0422 18:26:55.902582   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.902595   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:55.902601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:55.902663   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:55.942185   78377 cri.go:89] found id: ""
	I0422 18:26:55.942215   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.942226   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:55.942237   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:55.942252   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.957050   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:55.957083   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:56.035015   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:56.035043   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:56.035058   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:56.125595   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:56.125636   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:56.169096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:56.169131   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:58.725079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:58.739736   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:58.739808   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:58.777724   78377 cri.go:89] found id: ""
	I0422 18:26:58.777752   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.777762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:58.777769   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:58.777828   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:58.814668   78377 cri.go:89] found id: ""
	I0422 18:26:58.814702   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.814713   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:58.814721   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:58.814791   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:58.852609   78377 cri.go:89] found id: ""
	I0422 18:26:58.852634   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.852648   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:58.852655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:58.852720   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:58.891881   78377 cri.go:89] found id: ""
	I0422 18:26:58.891904   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.891910   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:58.891936   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:58.891994   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:58.931663   78377 cri.go:89] found id: ""
	I0422 18:26:58.931690   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.931701   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:58.931708   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:58.931782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:58.967795   78377 cri.go:89] found id: ""
	I0422 18:26:58.967816   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.967823   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:58.967829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:58.967879   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:59.008898   78377 cri.go:89] found id: ""
	I0422 18:26:59.008932   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.008943   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:59.008950   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:59.009007   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:59.049230   78377 cri.go:89] found id: ""
	I0422 18:26:59.049267   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.049278   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:59.049288   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:59.049304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:59.104461   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:59.104508   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:59.119555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:59.119584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:59.195905   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:59.195952   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:59.195969   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:59.276319   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:59.276360   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:57.703613   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:00.205449   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:59.861376   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.862613   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.386313   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:03.883728   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.818221   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:01.833234   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:01.833294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:01.870997   78377 cri.go:89] found id: ""
	I0422 18:27:01.871022   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.871030   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:01.871036   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:01.871102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:01.910414   78377 cri.go:89] found id: ""
	I0422 18:27:01.910443   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.910453   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:01.910461   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:01.910526   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:01.949499   78377 cri.go:89] found id: ""
	I0422 18:27:01.949524   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.949532   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:01.949537   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:01.949598   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:01.987702   78377 cri.go:89] found id: ""
	I0422 18:27:01.987736   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.987747   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:01.987763   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:01.987836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:02.027193   78377 cri.go:89] found id: ""
	I0422 18:27:02.027222   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.027233   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:02.027240   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:02.027332   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:02.067537   78377 cri.go:89] found id: ""
	I0422 18:27:02.067564   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.067578   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:02.067584   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:02.067631   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:02.111085   78377 cri.go:89] found id: ""
	I0422 18:27:02.111112   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.111119   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:02.111140   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:02.111194   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:02.150730   78377 cri.go:89] found id: ""
	I0422 18:27:02.150760   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.150769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:02.150777   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:02.150789   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:02.230124   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:02.230150   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:02.230164   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:02.315337   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:02.315384   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:02.362022   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:02.362048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:02.421884   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:02.421924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:02.205610   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.704158   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.359865   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:06.359968   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.360935   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:05.884072   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.386493   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.937145   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:04.952303   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:04.952412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:04.995024   78377 cri.go:89] found id: ""
	I0422 18:27:04.995059   78377 logs.go:276] 0 containers: []
	W0422 18:27:04.995071   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:04.995079   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:04.995151   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:05.035094   78377 cri.go:89] found id: ""
	I0422 18:27:05.035129   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.035141   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:05.035148   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:05.035204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:05.074178   78377 cri.go:89] found id: ""
	I0422 18:27:05.074204   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.074215   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:05.074222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:05.074294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:05.115285   78377 cri.go:89] found id: ""
	I0422 18:27:05.115313   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.115324   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:05.115331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:05.115398   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:05.151000   78377 cri.go:89] found id: ""
	I0422 18:27:05.151032   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.151041   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:05.151047   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:05.151189   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:05.191627   78377 cri.go:89] found id: ""
	I0422 18:27:05.191651   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.191659   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:05.191664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:05.191710   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:05.232141   78377 cri.go:89] found id: ""
	I0422 18:27:05.232173   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.232183   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:05.232191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:05.232252   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:05.268498   78377 cri.go:89] found id: ""
	I0422 18:27:05.268523   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.268530   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:05.268537   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:05.268554   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:05.315909   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:05.315937   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:05.369623   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:05.369664   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:05.387343   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:05.387381   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:05.466087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:05.466106   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:05.466117   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:08.053578   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:08.067569   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:08.067627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:08.108274   78377 cri.go:89] found id: ""
	I0422 18:27:08.108307   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.108318   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:08.108325   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:08.108384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:08.155343   78377 cri.go:89] found id: ""
	I0422 18:27:08.155366   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.155373   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:08.155379   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:08.155435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:08.194636   78377 cri.go:89] found id: ""
	I0422 18:27:08.194661   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.194672   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:08.194677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:08.194724   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:08.232992   78377 cri.go:89] found id: ""
	I0422 18:27:08.233017   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.233024   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:08.233029   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:08.233076   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:08.271349   78377 cri.go:89] found id: ""
	I0422 18:27:08.271381   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.271391   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:08.271407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:08.271459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:08.311991   78377 cri.go:89] found id: ""
	I0422 18:27:08.312021   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.312033   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:08.312042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:08.312097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:08.353301   78377 cri.go:89] found id: ""
	I0422 18:27:08.353326   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.353333   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:08.353340   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:08.353399   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:08.391989   78377 cri.go:89] found id: ""
	I0422 18:27:08.392015   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.392025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:08.392035   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:08.392048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:08.437228   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:08.437260   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:08.489086   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:08.489121   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:08.503588   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:08.503616   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:08.583824   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:08.583845   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:08.583858   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:07.203802   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:09.204754   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.862854   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.361215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.883779   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:12.883989   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:11.164702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:11.178228   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:11.178293   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:11.217691   78377 cri.go:89] found id: ""
	I0422 18:27:11.217719   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.217729   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:11.217735   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:11.217796   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:11.253648   78377 cri.go:89] found id: ""
	I0422 18:27:11.253676   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.253685   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:11.253692   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:11.253753   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:11.290934   78377 cri.go:89] found id: ""
	I0422 18:27:11.290968   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.290979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:11.290988   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:11.291051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:11.331215   78377 cri.go:89] found id: ""
	I0422 18:27:11.331240   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.331249   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:11.331254   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:11.331344   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:11.371595   78377 cri.go:89] found id: ""
	I0422 18:27:11.371621   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.371629   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:11.371634   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:11.371697   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:11.413577   78377 cri.go:89] found id: ""
	I0422 18:27:11.413607   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.413616   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:11.413624   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:11.413684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:11.450669   78377 cri.go:89] found id: ""
	I0422 18:27:11.450700   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.450709   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:11.450717   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:11.450779   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:11.488096   78377 cri.go:89] found id: ""
	I0422 18:27:11.488122   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.488131   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:11.488142   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:11.488156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.540258   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:11.540299   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:11.555878   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:11.555922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:11.638190   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:11.638212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:11.638224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.719691   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:11.719726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:14.268811   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:14.283695   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:14.283749   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:14.323252   78377 cri.go:89] found id: ""
	I0422 18:27:14.323286   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.323299   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:14.323306   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:14.323370   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:14.362354   78377 cri.go:89] found id: ""
	I0422 18:27:14.362375   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.362382   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:14.362387   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:14.362450   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:14.405439   78377 cri.go:89] found id: ""
	I0422 18:27:14.405460   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.405467   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:14.405473   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:14.405531   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:14.445358   78377 cri.go:89] found id: ""
	I0422 18:27:14.445389   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.445399   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:14.445407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:14.445476   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:14.481933   78377 cri.go:89] found id: ""
	I0422 18:27:14.481961   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.481969   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:14.481974   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:14.482033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:14.526992   78377 cri.go:89] found id: ""
	I0422 18:27:14.527019   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.527028   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:14.527040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:14.527089   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:14.562197   78377 cri.go:89] found id: ""
	I0422 18:27:14.562221   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.562229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:14.562238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:14.562287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:14.599098   78377 cri.go:89] found id: ""
	I0422 18:27:14.599141   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.599153   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:14.599164   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:14.599177   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.205525   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.706785   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:15.861009   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.861214   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.884371   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.384911   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.655768   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:14.655800   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:14.670894   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:14.670929   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:14.759845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:14.759863   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:14.759874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:14.839715   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:14.839752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:17.384859   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:17.399664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:17.399741   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:17.439786   78377 cri.go:89] found id: ""
	I0422 18:27:17.439809   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.439817   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:17.439822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:17.439878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:17.476532   78377 cri.go:89] found id: ""
	I0422 18:27:17.476553   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.476561   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:17.476566   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:17.476623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:17.513464   78377 cri.go:89] found id: ""
	I0422 18:27:17.513488   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.513495   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:17.513500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:17.513546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:17.548793   78377 cri.go:89] found id: ""
	I0422 18:27:17.548821   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.548831   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:17.548838   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:17.548888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:17.584600   78377 cri.go:89] found id: ""
	I0422 18:27:17.584626   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.584636   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:17.584644   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:17.584705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:17.621574   78377 cri.go:89] found id: ""
	I0422 18:27:17.621603   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.621615   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:17.621622   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:17.621686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:17.663252   78377 cri.go:89] found id: ""
	I0422 18:27:17.663283   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.663290   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:17.663295   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:17.663352   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:17.702987   78377 cri.go:89] found id: ""
	I0422 18:27:17.703014   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.703025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:17.703035   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:17.703049   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:17.758182   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:17.758222   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:17.775796   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:17.775828   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:17.866450   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:17.866493   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:17.866507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:17.947651   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:17.947685   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:16.204000   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:18.704622   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.864836   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:22.360984   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.883393   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:21.885743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.384476   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:20.489441   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:20.502920   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:20.502987   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:20.540533   78377 cri.go:89] found id: ""
	I0422 18:27:20.540557   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.540565   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:20.540569   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:20.540612   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:20.578789   78377 cri.go:89] found id: ""
	I0422 18:27:20.578815   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.578824   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:20.578832   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:20.578900   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:20.613481   78377 cri.go:89] found id: ""
	I0422 18:27:20.613515   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.613525   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:20.613533   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:20.613597   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:20.650289   78377 cri.go:89] found id: ""
	I0422 18:27:20.650320   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.650331   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:20.650339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:20.650400   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:20.686259   78377 cri.go:89] found id: ""
	I0422 18:27:20.686288   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.686300   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:20.686306   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:20.686367   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:20.725983   78377 cri.go:89] found id: ""
	I0422 18:27:20.726011   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.726018   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:20.726024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:20.726092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:20.762193   78377 cri.go:89] found id: ""
	I0422 18:27:20.762220   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.762229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:20.762237   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:20.762295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:20.800738   78377 cri.go:89] found id: ""
	I0422 18:27:20.800761   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.800769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:20.800776   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:20.800787   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.842744   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:20.842771   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:20.896307   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:20.896337   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:20.911457   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:20.911485   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:20.985249   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:20.985277   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:20.985293   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:23.560513   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:23.585134   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:23.585214   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:23.624947   78377 cri.go:89] found id: ""
	I0422 18:27:23.624972   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.624980   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:23.624986   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:23.625051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:23.661886   78377 cri.go:89] found id: ""
	I0422 18:27:23.661915   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.661924   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:23.661929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:23.661997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:23.701061   78377 cri.go:89] found id: ""
	I0422 18:27:23.701087   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.701097   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:23.701104   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:23.701163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:23.742728   78377 cri.go:89] found id: ""
	I0422 18:27:23.742753   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.742760   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:23.742765   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:23.742813   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:23.786970   78377 cri.go:89] found id: ""
	I0422 18:27:23.787002   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.787011   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:23.787017   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:23.787070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:23.825253   78377 cri.go:89] found id: ""
	I0422 18:27:23.825282   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.825292   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:23.825300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:23.825357   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:23.865774   78377 cri.go:89] found id: ""
	I0422 18:27:23.865799   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.865807   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:23.865812   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:23.865860   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:23.903212   78377 cri.go:89] found id: ""
	I0422 18:27:23.903239   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.903247   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:23.903254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:23.903267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:23.958931   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:23.958968   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:23.973352   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:23.973383   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:24.053335   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:24.053356   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:24.053367   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:24.136491   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:24.136528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.704821   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:23.203548   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:25.204601   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.361665   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.361708   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.388979   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.882505   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.679983   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:26.694521   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:26.694583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:26.733114   78377 cri.go:89] found id: ""
	I0422 18:27:26.733146   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.733156   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:26.733163   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:26.733221   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:26.776882   78377 cri.go:89] found id: ""
	I0422 18:27:26.776906   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.776913   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:26.776918   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:26.776966   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:26.822830   78377 cri.go:89] found id: ""
	I0422 18:27:26.822863   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.822874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:26.822882   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:26.822945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:26.868600   78377 cri.go:89] found id: ""
	I0422 18:27:26.868633   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.868641   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:26.868655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:26.868712   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:26.907547   78377 cri.go:89] found id: ""
	I0422 18:27:26.907570   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.907578   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:26.907583   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:26.907640   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:26.947594   78377 cri.go:89] found id: ""
	I0422 18:27:26.947635   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.947647   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:26.947656   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:26.947715   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:26.986732   78377 cri.go:89] found id: ""
	I0422 18:27:26.986761   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.986772   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:26.986780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:26.986838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:27.024338   78377 cri.go:89] found id: ""
	I0422 18:27:27.024370   78377 logs.go:276] 0 containers: []
	W0422 18:27:27.024378   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:27.024385   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:27.024396   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:27.077071   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:27.077112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:27.092664   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:27.092694   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:27.173056   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:27.173081   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:27.173099   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:27.257836   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:27.257877   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:27.714190   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.204420   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.861728   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:31.360750   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.360969   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.883051   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.386563   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:29.800456   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:29.816085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:29.816150   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:29.858826   78377 cri.go:89] found id: ""
	I0422 18:27:29.858857   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.858878   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:29.858886   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:29.858956   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:29.900369   78377 cri.go:89] found id: ""
	I0422 18:27:29.900403   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.900417   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:29.900424   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:29.900490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:29.939766   78377 cri.go:89] found id: ""
	I0422 18:27:29.939801   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.939811   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:29.939818   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:29.939889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:29.986579   78377 cri.go:89] found id: ""
	I0422 18:27:29.986607   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.986617   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:29.986625   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:29.986685   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:30.030059   78377 cri.go:89] found id: ""
	I0422 18:27:30.030090   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.030102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:30.030110   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:30.030192   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:30.077543   78377 cri.go:89] found id: ""
	I0422 18:27:30.077573   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.077581   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:30.077586   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:30.077645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:30.123087   78377 cri.go:89] found id: ""
	I0422 18:27:30.123116   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.123137   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:30.123145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:30.123203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:30.160589   78377 cri.go:89] found id: ""
	I0422 18:27:30.160613   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.160621   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:30.160628   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:30.160639   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:30.213321   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:30.213352   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:30.228102   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:30.228129   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:30.303977   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:30.304013   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:30.304029   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:30.383817   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:30.383851   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:32.930619   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:32.943854   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:32.943914   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:32.984112   78377 cri.go:89] found id: ""
	I0422 18:27:32.984138   78377 logs.go:276] 0 containers: []
	W0422 18:27:32.984146   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:32.984151   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:32.984200   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:33.022243   78377 cri.go:89] found id: ""
	I0422 18:27:33.022283   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.022294   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:33.022301   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:33.022366   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:33.061177   78377 cri.go:89] found id: ""
	I0422 18:27:33.061205   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.061214   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:33.061222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:33.061281   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:33.104430   78377 cri.go:89] found id: ""
	I0422 18:27:33.104458   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.104466   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:33.104471   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:33.104528   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:33.140255   78377 cri.go:89] found id: ""
	I0422 18:27:33.140284   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.140295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:33.140302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:33.140362   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:33.179487   78377 cri.go:89] found id: ""
	I0422 18:27:33.179512   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.179519   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:33.179524   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:33.179576   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:33.217226   78377 cri.go:89] found id: ""
	I0422 18:27:33.217258   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.217265   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:33.217271   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:33.217319   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:33.257076   78377 cri.go:89] found id: ""
	I0422 18:27:33.257104   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.257114   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:33.257123   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:33.257137   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:33.271183   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:33.271211   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:33.344812   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:33.344843   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:33.344859   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:33.420605   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:33.420640   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:33.465779   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:33.465807   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:32.704424   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:34.705215   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.861184   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.361048   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.883602   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.383601   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:36.019062   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:36.039226   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:36.039305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:36.082940   78377 cri.go:89] found id: ""
	I0422 18:27:36.082978   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.082991   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:36.083000   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:36.083063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:36.120371   78377 cri.go:89] found id: ""
	I0422 18:27:36.120416   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.120428   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:36.120436   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:36.120496   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:36.158018   78377 cri.go:89] found id: ""
	I0422 18:27:36.158051   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.158063   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:36.158070   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:36.158131   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:36.196192   78377 cri.go:89] found id: ""
	I0422 18:27:36.196221   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.196231   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:36.196238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:36.196305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:36.237742   78377 cri.go:89] found id: ""
	I0422 18:27:36.237773   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.237784   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:36.237791   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:36.237852   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:36.277884   78377 cri.go:89] found id: ""
	I0422 18:27:36.277911   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.277918   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:36.277923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:36.277993   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:36.314897   78377 cri.go:89] found id: ""
	I0422 18:27:36.314929   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.314939   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:36.314947   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:36.315009   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:36.354806   78377 cri.go:89] found id: ""
	I0422 18:27:36.354833   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.354843   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:36.354851   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:36.354863   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.406941   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:36.406981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:36.423308   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:36.423344   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:36.507202   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:36.507223   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:36.507238   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:36.582489   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:36.582525   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:39.127409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:39.140820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:39.140895   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:39.182068   78377 cri.go:89] found id: ""
	I0422 18:27:39.182094   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.182105   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:39.182112   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:39.182169   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:39.222711   78377 cri.go:89] found id: ""
	I0422 18:27:39.222735   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.222751   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:39.222756   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:39.222827   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:39.263396   78377 cri.go:89] found id: ""
	I0422 18:27:39.263423   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.263432   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:39.263437   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:39.263490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:39.300559   78377 cri.go:89] found id: ""
	I0422 18:27:39.300589   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.300603   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:39.300610   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:39.300672   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:39.336486   78377 cri.go:89] found id: ""
	I0422 18:27:39.336521   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.336530   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:39.336536   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:39.336584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:39.373985   78377 cri.go:89] found id: ""
	I0422 18:27:39.374020   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.374030   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:39.374038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:39.374097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:39.412511   78377 cri.go:89] found id: ""
	I0422 18:27:39.412540   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.412547   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:39.412553   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:39.412616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:39.459197   78377 cri.go:89] found id: ""
	I0422 18:27:39.459233   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.459243   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:39.459254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:39.459269   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:39.514579   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:39.514623   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:39.530082   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:39.530107   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:39.603797   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:39.603830   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:39.603854   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:37.203082   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.204563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.860739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.861544   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.385271   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.389273   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.684853   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:39.684890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:42.227702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:42.243438   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:42.243499   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:42.290374   78377 cri.go:89] found id: ""
	I0422 18:27:42.290402   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.290413   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:42.290420   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:42.290481   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:42.332793   78377 cri.go:89] found id: ""
	I0422 18:27:42.332828   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.332840   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:42.332875   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:42.332937   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:42.375844   78377 cri.go:89] found id: ""
	I0422 18:27:42.375876   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.375884   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:42.375889   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:42.375945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:42.419725   78377 cri.go:89] found id: ""
	I0422 18:27:42.419758   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.419769   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:42.419777   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:42.419878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:42.453969   78377 cri.go:89] found id: ""
	I0422 18:27:42.454004   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.454014   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:42.454022   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:42.454080   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:42.489045   78377 cri.go:89] found id: ""
	I0422 18:27:42.489077   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.489087   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:42.489095   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:42.489157   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:42.529127   78377 cri.go:89] found id: ""
	I0422 18:27:42.529155   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.529166   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:42.529174   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:42.529229   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:42.566253   78377 cri.go:89] found id: ""
	I0422 18:27:42.566278   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.566286   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:42.566293   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:42.566307   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:42.622054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:42.622101   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:42.636278   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:42.636304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:42.712179   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:42.712203   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:42.712215   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:42.791885   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:42.791928   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:41.705615   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.203947   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.361656   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:47.860929   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.882684   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:46.886119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:49.382017   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.337091   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:45.353053   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:45.353133   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:45.393230   78377 cri.go:89] found id: ""
	I0422 18:27:45.393257   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.393267   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:45.393274   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:45.393330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:45.432183   78377 cri.go:89] found id: ""
	I0422 18:27:45.432210   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.432220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:45.432228   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:45.432285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:45.468114   78377 cri.go:89] found id: ""
	I0422 18:27:45.468147   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.468157   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:45.468169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:45.468233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:45.504793   78377 cri.go:89] found id: ""
	I0422 18:27:45.504817   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.504836   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:45.504841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:45.504889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:45.544822   78377 cri.go:89] found id: ""
	I0422 18:27:45.544851   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.544862   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:45.544868   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:45.544934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:45.588266   78377 cri.go:89] found id: ""
	I0422 18:27:45.588289   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.588322   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:45.588330   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:45.588391   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:45.625549   78377 cri.go:89] found id: ""
	I0422 18:27:45.625576   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.625583   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:45.625589   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:45.625639   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:45.663066   78377 cri.go:89] found id: ""
	I0422 18:27:45.663096   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.663104   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:45.663114   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:45.663143   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:45.715051   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:45.715082   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:45.729496   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:45.729523   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:45.801270   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:45.801296   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:45.801312   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:45.886530   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:45.886561   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:48.429822   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:48.444528   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:48.444610   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:48.483164   78377 cri.go:89] found id: ""
	I0422 18:27:48.483194   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.483204   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:48.483210   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:48.483257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:48.520295   78377 cri.go:89] found id: ""
	I0422 18:27:48.520321   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.520328   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:48.520333   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:48.520378   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:48.558839   78377 cri.go:89] found id: ""
	I0422 18:27:48.558866   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.558875   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:48.558881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:48.558939   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:48.599692   78377 cri.go:89] found id: ""
	I0422 18:27:48.599715   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.599722   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:48.599728   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:48.599773   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:48.638457   78377 cri.go:89] found id: ""
	I0422 18:27:48.638486   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.638494   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:48.638500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:48.638561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:48.677344   78377 cri.go:89] found id: ""
	I0422 18:27:48.677383   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.677395   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:48.677402   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:48.677466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:48.717129   78377 cri.go:89] found id: ""
	I0422 18:27:48.717155   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.717163   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:48.717169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:48.717219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:48.758256   78377 cri.go:89] found id: ""
	I0422 18:27:48.758281   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.758289   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:48.758297   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:48.758311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:48.810377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:48.810415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:48.824919   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:48.824949   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:48.908446   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:48.908473   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:48.908569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:48.984952   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:48.984991   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:46.703083   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:48.705413   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:50.361465   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:52.364509   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.384561   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.882657   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.527387   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:51.541482   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:51.541560   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.579020   78377 cri.go:89] found id: ""
	I0422 18:27:51.579098   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.579114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:51.579134   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:51.579204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:51.616430   78377 cri.go:89] found id: ""
	I0422 18:27:51.616456   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.616465   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:51.616470   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:51.616516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:51.654089   78377 cri.go:89] found id: ""
	I0422 18:27:51.654120   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.654131   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:51.654138   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:51.654201   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:51.693945   78377 cri.go:89] found id: ""
	I0422 18:27:51.693979   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.693993   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:51.694000   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:51.694068   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:51.732873   78377 cri.go:89] found id: ""
	I0422 18:27:51.732906   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.732917   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:51.732923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:51.732990   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:51.770772   78377 cri.go:89] found id: ""
	I0422 18:27:51.770794   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.770801   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:51.770807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:51.770862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:51.819370   78377 cri.go:89] found id: ""
	I0422 18:27:51.819397   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.819405   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:51.819411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:51.819459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:51.858001   78377 cri.go:89] found id: ""
	I0422 18:27:51.858033   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.858044   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:51.858055   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:51.858069   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:51.938531   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:51.938557   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:51.938571   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:52.014397   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:52.014435   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:52.059420   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:52.059458   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:52.119498   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:52.119534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:54.634238   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:54.649044   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:54.649119   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.203623   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.205834   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.863919   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.360796   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:56.383743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:58.383783   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.691846   78377 cri.go:89] found id: ""
	I0422 18:27:54.691879   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.691890   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:54.691907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:54.691970   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:54.731466   78377 cri.go:89] found id: ""
	I0422 18:27:54.731496   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.731507   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:54.731515   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:54.731588   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:54.776948   78377 cri.go:89] found id: ""
	I0422 18:27:54.776972   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.776979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:54.776984   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:54.777031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:54.815908   78377 cri.go:89] found id: ""
	I0422 18:27:54.815939   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.815946   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:54.815952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:54.815997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:54.856641   78377 cri.go:89] found id: ""
	I0422 18:27:54.856673   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.856684   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:54.856690   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:54.856757   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:54.896968   78377 cri.go:89] found id: ""
	I0422 18:27:54.896996   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.897004   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:54.897009   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:54.897073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:54.936353   78377 cri.go:89] found id: ""
	I0422 18:27:54.936388   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.936400   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:54.936407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:54.936468   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:54.976009   78377 cri.go:89] found id: ""
	I0422 18:27:54.976038   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.976048   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:54.976058   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:54.976071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:55.027890   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:55.027924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:55.041914   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:55.041939   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:55.112556   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.112583   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:55.112597   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:55.187688   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:55.187723   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:57.730259   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:57.745006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:57.745073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:57.786906   78377 cri.go:89] found id: ""
	I0422 18:27:57.786942   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.786952   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:57.786959   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:57.787019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:57.827158   78377 cri.go:89] found id: ""
	I0422 18:27:57.827188   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.827199   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:57.827206   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:57.827254   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:57.864370   78377 cri.go:89] found id: ""
	I0422 18:27:57.864405   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.864413   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:57.864419   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:57.864475   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:57.903747   78377 cri.go:89] found id: ""
	I0422 18:27:57.903773   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.903781   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:57.903786   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:57.903846   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:57.941674   78377 cri.go:89] found id: ""
	I0422 18:27:57.941705   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.941713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:57.941718   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:57.941767   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:57.984888   78377 cri.go:89] found id: ""
	I0422 18:27:57.984918   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.984929   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:57.984935   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:57.984980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:58.026964   78377 cri.go:89] found id: ""
	I0422 18:27:58.026993   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.027006   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:58.027012   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:58.027059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:58.065403   78377 cri.go:89] found id: ""
	I0422 18:27:58.065430   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.065440   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:58.065450   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:58.065464   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:58.152471   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:58.152518   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:58.198766   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:58.198803   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:58.257760   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:58.257798   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:58.272656   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:58.272693   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:58.385784   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.703110   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.704061   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.704421   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.361229   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:01.362273   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.385750   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:02.886349   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.886736   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:00.902607   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:00.902684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:00.941476   78377 cri.go:89] found id: ""
	I0422 18:28:00.941506   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.941515   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:00.941521   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:00.941571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:00.983107   78377 cri.go:89] found id: ""
	I0422 18:28:00.983142   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.983152   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:00.983159   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:00.983216   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:01.024419   78377 cri.go:89] found id: ""
	I0422 18:28:01.024448   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.024455   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:01.024461   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:01.024517   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:01.065941   78377 cri.go:89] found id: ""
	I0422 18:28:01.065973   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.065984   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:01.065992   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:01.066041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:01.107857   78377 cri.go:89] found id: ""
	I0422 18:28:01.107898   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.107908   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:01.107916   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:01.107980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:01.149626   78377 cri.go:89] found id: ""
	I0422 18:28:01.149657   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.149667   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:01.149676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:01.149740   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:01.190491   78377 cri.go:89] found id: ""
	I0422 18:28:01.190520   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.190529   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:01.190535   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:01.190590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:01.231145   78377 cri.go:89] found id: ""
	I0422 18:28:01.231176   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.231187   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:01.231197   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:01.231208   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:01.317826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:01.317874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:01.369441   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:01.369478   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:01.432210   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:01.432251   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:01.446720   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:01.446749   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:01.528643   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.029816   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:04.044751   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:04.044836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:04.085044   78377 cri.go:89] found id: ""
	I0422 18:28:04.085077   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.085089   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:04.085097   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:04.085148   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:04.129071   78377 cri.go:89] found id: ""
	I0422 18:28:04.129100   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.129111   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:04.129118   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:04.129181   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:04.167838   78377 cri.go:89] found id: ""
	I0422 18:28:04.167864   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.167874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:04.167881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:04.167943   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:04.216283   78377 cri.go:89] found id: ""
	I0422 18:28:04.216313   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.216321   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:04.216327   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:04.216376   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:04.255693   78377 cri.go:89] found id: ""
	I0422 18:28:04.255724   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.255731   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:04.255737   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:04.255786   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:04.293601   78377 cri.go:89] found id: ""
	I0422 18:28:04.293639   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.293651   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:04.293659   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:04.293709   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:04.358730   78377 cri.go:89] found id: ""
	I0422 18:28:04.358755   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.358767   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:04.358774   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:04.358837   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:04.399231   78377 cri.go:89] found id: ""
	I0422 18:28:04.399261   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.399271   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:04.399280   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:04.399291   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:04.415526   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:04.415558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:04.491845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.491871   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:04.491885   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:04.575076   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:04.575148   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:04.621931   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:04.621956   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:02.203877   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:04.204896   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:03.860506   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.860713   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.384180   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.884714   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.173117   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:07.188914   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:07.188973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:07.233867   78377 cri.go:89] found id: ""
	I0422 18:28:07.233894   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.233902   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:07.233907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:07.233968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:07.274777   78377 cri.go:89] found id: ""
	I0422 18:28:07.274818   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.274828   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:07.274835   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:07.274897   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:07.310813   78377 cri.go:89] found id: ""
	I0422 18:28:07.310864   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.310874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:07.310881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:07.310951   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:07.348397   78377 cri.go:89] found id: ""
	I0422 18:28:07.348423   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.348431   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:07.348436   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:07.348489   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:07.387344   78377 cri.go:89] found id: ""
	I0422 18:28:07.387371   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.387381   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:07.387388   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:07.387443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:07.426117   78377 cri.go:89] found id: ""
	I0422 18:28:07.426147   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.426158   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:07.426166   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:07.426233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:07.466624   78377 cri.go:89] found id: ""
	I0422 18:28:07.466653   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.466664   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:07.466671   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:07.466729   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:07.504282   78377 cri.go:89] found id: ""
	I0422 18:28:07.504306   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.504342   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:07.504353   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:07.504369   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:07.584111   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:07.584146   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:07.627212   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:07.627240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.676814   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:07.676849   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:07.691117   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:07.691156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:07.764300   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:06.206560   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.703406   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.364348   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.861760   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.361127   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.392330   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:12.883081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.265313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:10.280094   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:10.280170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:10.318208   78377 cri.go:89] found id: ""
	I0422 18:28:10.318236   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.318245   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:10.318251   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:10.318305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:10.353450   78377 cri.go:89] found id: ""
	I0422 18:28:10.353477   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.353484   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:10.353490   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:10.353547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:10.398359   78377 cri.go:89] found id: ""
	I0422 18:28:10.398389   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.398400   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:10.398411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:10.398474   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:10.435896   78377 cri.go:89] found id: ""
	I0422 18:28:10.435928   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.435939   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:10.435946   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:10.436025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:10.479313   78377 cri.go:89] found id: ""
	I0422 18:28:10.479342   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.479353   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:10.479360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:10.479433   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:10.521949   78377 cri.go:89] found id: ""
	I0422 18:28:10.521978   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.521990   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:10.521997   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:10.522054   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:10.557697   78377 cri.go:89] found id: ""
	I0422 18:28:10.557722   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.557732   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:10.557739   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:10.557804   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:10.595060   78377 cri.go:89] found id: ""
	I0422 18:28:10.595090   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.595102   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:10.595112   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:10.595142   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:10.649535   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:10.649570   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:10.664176   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:10.664210   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:10.748778   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.748818   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:10.748839   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:10.858019   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:10.858062   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:13.405737   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:13.420265   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:13.420342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:13.456505   78377 cri.go:89] found id: ""
	I0422 18:28:13.456534   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.456545   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:13.456551   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:13.456611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:13.493435   78377 cri.go:89] found id: ""
	I0422 18:28:13.493464   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.493477   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:13.493485   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:13.493541   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:13.530572   78377 cri.go:89] found id: ""
	I0422 18:28:13.530602   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.530614   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:13.530620   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:13.530682   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:13.565448   78377 cri.go:89] found id: ""
	I0422 18:28:13.565472   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.565480   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:13.565485   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:13.565574   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:13.613806   78377 cri.go:89] found id: ""
	I0422 18:28:13.613840   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.613851   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:13.613860   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:13.613924   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:13.649483   78377 cri.go:89] found id: ""
	I0422 18:28:13.649511   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.649522   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:13.649529   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:13.649589   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:13.689149   78377 cri.go:89] found id: ""
	I0422 18:28:13.689182   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.689193   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:13.689200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:13.689257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:13.726431   78377 cri.go:89] found id: ""
	I0422 18:28:13.726454   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.726461   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:13.726468   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:13.726480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:13.782843   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:13.782882   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:13.797390   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:13.797415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:13.877880   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:13.877905   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:13.877923   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:13.959103   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:13.959154   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:10.705202   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.203760   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.205898   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.361423   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:17.363341   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:14.883352   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.886433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.382478   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.502589   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:16.519996   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:16.520070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:16.559001   78377 cri.go:89] found id: ""
	I0422 18:28:16.559029   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.559037   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:16.559043   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:16.559095   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:16.620188   78377 cri.go:89] found id: ""
	I0422 18:28:16.620211   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.620219   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:16.620224   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:16.620283   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:16.670220   78377 cri.go:89] found id: ""
	I0422 18:28:16.670253   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.670264   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:16.670279   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:16.670345   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:16.710931   78377 cri.go:89] found id: ""
	I0422 18:28:16.710962   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.710973   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:16.710980   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:16.711043   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:16.748793   78377 cri.go:89] found id: ""
	I0422 18:28:16.748838   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.748845   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:16.748851   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:16.748904   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:16.785518   78377 cri.go:89] found id: ""
	I0422 18:28:16.785547   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.785554   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:16.785564   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:16.785616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:16.825141   78377 cri.go:89] found id: ""
	I0422 18:28:16.825174   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.825192   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:16.825200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:16.825265   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:16.866918   78377 cri.go:89] found id: ""
	I0422 18:28:16.866947   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.866958   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:16.866972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:16.866987   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:16.912589   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:16.912633   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:16.968407   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:16.968446   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:16.983202   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:16.983241   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:17.063852   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:17.063875   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:17.063889   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:19.645012   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:17.703917   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.704958   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.861537   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.862949   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.882158   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:23.885280   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.659676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:19.659750   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:19.697348   78377 cri.go:89] found id: ""
	I0422 18:28:19.697382   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.697393   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:19.697401   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:19.697461   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:19.738830   78377 cri.go:89] found id: ""
	I0422 18:28:19.738864   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.738876   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:19.738883   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:19.738945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:19.783452   78377 cri.go:89] found id: ""
	I0422 18:28:19.783476   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.783483   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:19.783491   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:19.783554   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:19.826848   78377 cri.go:89] found id: ""
	I0422 18:28:19.826875   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.826886   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:19.826893   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:19.826945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:19.867207   78377 cri.go:89] found id: ""
	I0422 18:28:19.867229   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.867236   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:19.867242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:19.867298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:19.903752   78377 cri.go:89] found id: ""
	I0422 18:28:19.903783   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.903799   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:19.903806   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:19.903870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:19.946891   78377 cri.go:89] found id: ""
	I0422 18:28:19.946914   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.946921   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:19.946927   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:19.946997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:19.989272   78377 cri.go:89] found id: ""
	I0422 18:28:19.989297   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.989304   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:19.989312   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:19.989323   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:20.038854   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:20.038887   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:20.053553   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:20.053584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:20.132687   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:20.132712   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:20.132727   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:20.209600   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:20.209634   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.752356   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:22.765506   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:22.765567   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:22.804991   78377 cri.go:89] found id: ""
	I0422 18:28:22.805022   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.805029   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:22.805035   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:22.805082   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:22.843726   78377 cri.go:89] found id: ""
	I0422 18:28:22.843757   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.843768   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:22.843775   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:22.843838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:22.884584   78377 cri.go:89] found id: ""
	I0422 18:28:22.884610   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.884620   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:22.884627   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:22.884701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:22.920974   78377 cri.go:89] found id: ""
	I0422 18:28:22.921004   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.921020   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:22.921028   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:22.921092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:22.956676   78377 cri.go:89] found id: ""
	I0422 18:28:22.956702   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.956713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:22.956720   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:22.956784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:22.997517   78377 cri.go:89] found id: ""
	I0422 18:28:22.997545   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.997553   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:22.997559   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:22.997623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:23.036448   78377 cri.go:89] found id: ""
	I0422 18:28:23.036478   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.036489   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:23.036497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:23.036561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:23.075567   78377 cri.go:89] found id: ""
	I0422 18:28:23.075592   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.075600   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:23.075611   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:23.075625   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:23.130372   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:23.130408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:23.147534   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:23.147567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:23.222730   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:23.222753   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:23.222765   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:23.301972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:23.302006   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.204356   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.703765   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.361251   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:26.862825   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.886291   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:28.382905   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.847521   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:25.861780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:25.861867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:25.899314   78377 cri.go:89] found id: ""
	I0422 18:28:25.899341   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.899349   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:25.899355   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:25.899412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:25.940057   78377 cri.go:89] found id: ""
	I0422 18:28:25.940088   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.940099   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:25.940106   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:25.940163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:25.974923   78377 cri.go:89] found id: ""
	I0422 18:28:25.974951   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.974959   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:25.974968   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:25.975041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:26.012533   78377 cri.go:89] found id: ""
	I0422 18:28:26.012559   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.012566   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:26.012572   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:26.012620   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:26.049804   78377 cri.go:89] found id: ""
	I0422 18:28:26.049828   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.049835   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:26.049841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:26.049888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:26.092803   78377 cri.go:89] found id: ""
	I0422 18:28:26.092830   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.092842   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:26.092850   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:26.092919   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:26.130442   78377 cri.go:89] found id: ""
	I0422 18:28:26.130471   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.130480   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:26.130487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:26.130544   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:26.165933   78377 cri.go:89] found id: ""
	I0422 18:28:26.165957   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.165966   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:26.165974   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:26.165986   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:26.245237   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:26.245259   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:26.245278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:26.330143   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:26.330181   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.372178   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:26.372204   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:26.429779   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:26.429817   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:28.945985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:28.960470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:28.960546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:28.999618   78377 cri.go:89] found id: ""
	I0422 18:28:28.999639   78377 logs.go:276] 0 containers: []
	W0422 18:28:28.999648   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:28.999653   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:28.999711   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:29.034177   78377 cri.go:89] found id: ""
	I0422 18:28:29.034211   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.034220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:29.034225   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:29.034286   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:29.073759   78377 cri.go:89] found id: ""
	I0422 18:28:29.073782   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.073790   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:29.073796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:29.073857   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:29.111898   78377 cri.go:89] found id: ""
	I0422 18:28:29.111929   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.111941   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:29.111948   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:29.112005   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:29.148486   78377 cri.go:89] found id: ""
	I0422 18:28:29.148520   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.148531   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:29.148539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:29.148602   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:29.186715   78377 cri.go:89] found id: ""
	I0422 18:28:29.186743   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.186753   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:29.186759   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:29.186805   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:29.226387   78377 cri.go:89] found id: ""
	I0422 18:28:29.226422   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.226433   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:29.226440   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:29.226508   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:29.274102   78377 cri.go:89] found id: ""
	I0422 18:28:29.274131   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.274142   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:29.274152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:29.274165   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:29.333066   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:29.333104   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:29.348376   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:29.348411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:29.422976   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:29.423009   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:29.423022   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:29.501211   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:29.501253   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.705590   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.205641   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.361439   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:31.361534   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:30.383502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.887006   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.048316   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:32.063859   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:32.063934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:32.104527   78377 cri.go:89] found id: ""
	I0422 18:28:32.104560   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.104571   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:32.104580   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:32.104645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:32.142945   78377 cri.go:89] found id: ""
	I0422 18:28:32.142976   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.142984   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:32.142990   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:32.143036   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:32.182359   78377 cri.go:89] found id: ""
	I0422 18:28:32.182385   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.182393   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:32.182399   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:32.182446   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:32.223041   78377 cri.go:89] found id: ""
	I0422 18:28:32.223069   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.223077   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:32.223083   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:32.223161   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:32.261892   78377 cri.go:89] found id: ""
	I0422 18:28:32.261924   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.261936   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:32.261943   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:32.262008   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:32.307497   78377 cri.go:89] found id: ""
	I0422 18:28:32.307527   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.307537   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:32.307546   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:32.307617   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:32.345180   78377 cri.go:89] found id: ""
	I0422 18:28:32.345214   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.345227   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:32.345235   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:32.345299   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:32.385999   78377 cri.go:89] found id: ""
	I0422 18:28:32.386025   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.386033   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:32.386041   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:32.386053   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:32.444377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:32.444436   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:32.460566   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:32.460594   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:32.535839   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:32.535860   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:32.535872   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:32.621998   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:32.622039   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:31.704145   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.704841   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.860769   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.860833   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.861583   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.382871   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.383164   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.165079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:35.178804   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:35.178877   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:35.221032   78377 cri.go:89] found id: ""
	I0422 18:28:35.221065   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.221076   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:35.221083   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:35.221170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:35.262550   78377 cri.go:89] found id: ""
	I0422 18:28:35.262573   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.262583   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:35.262589   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:35.262651   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:35.301799   78377 cri.go:89] found id: ""
	I0422 18:28:35.301826   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.301834   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:35.301840   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:35.301901   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:35.340606   78377 cri.go:89] found id: ""
	I0422 18:28:35.340635   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.340642   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:35.340647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:35.340695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:35.386226   78377 cri.go:89] found id: ""
	I0422 18:28:35.386251   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.386261   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:35.386268   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:35.386330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:35.424555   78377 cri.go:89] found id: ""
	I0422 18:28:35.424584   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.424594   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:35.424601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:35.424662   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:35.465856   78377 cri.go:89] found id: ""
	I0422 18:28:35.465886   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.465895   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:35.465901   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:35.465963   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:35.504849   78377 cri.go:89] found id: ""
	I0422 18:28:35.504877   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.504887   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:35.504898   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:35.504931   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:35.579177   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:35.579202   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:35.579217   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:35.656322   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:35.656359   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.700376   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:35.700411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:35.753742   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:35.753776   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.269536   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:38.285945   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:38.286019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:38.324408   78377 cri.go:89] found id: ""
	I0422 18:28:38.324441   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.324461   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:38.324468   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:38.324539   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:38.362320   78377 cri.go:89] found id: ""
	I0422 18:28:38.362343   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.362350   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:38.362363   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:38.362411   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:38.404208   78377 cri.go:89] found id: ""
	I0422 18:28:38.404234   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.404243   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:38.404248   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:38.404309   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:38.448250   78377 cri.go:89] found id: ""
	I0422 18:28:38.448314   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.448325   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:38.448332   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:38.448397   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:38.485803   78377 cri.go:89] found id: ""
	I0422 18:28:38.485836   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.485848   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:38.485856   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:38.485915   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:38.525903   78377 cri.go:89] found id: ""
	I0422 18:28:38.525933   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.525943   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:38.525952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:38.526031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:38.562638   78377 cri.go:89] found id: ""
	I0422 18:28:38.562664   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.562672   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:38.562677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:38.562726   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:38.603614   78377 cri.go:89] found id: ""
	I0422 18:28:38.603642   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.603653   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:38.603662   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:38.603673   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:38.658054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:38.658086   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.674884   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:38.674908   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:38.748462   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:38.748502   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:38.748528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:38.826701   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:38.826741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:36.204210   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:38.205076   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:40.360574   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.862692   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:39.882407   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.882939   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:43.883102   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.374075   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:41.389161   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:41.389235   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:41.427033   78377 cri.go:89] found id: ""
	I0422 18:28:41.427064   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.427075   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:41.427096   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:41.427178   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:41.465376   78377 cri.go:89] found id: ""
	I0422 18:28:41.465408   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.465419   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:41.465427   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:41.465512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:41.502451   78377 cri.go:89] found id: ""
	I0422 18:28:41.502482   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.502490   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:41.502501   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:41.502563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:41.538748   78377 cri.go:89] found id: ""
	I0422 18:28:41.538784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.538796   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:41.538803   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:41.538862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:41.576877   78377 cri.go:89] found id: ""
	I0422 18:28:41.576928   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.576941   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:41.576949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:41.577010   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:41.615062   78377 cri.go:89] found id: ""
	I0422 18:28:41.615094   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.615105   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:41.615113   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:41.615190   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:41.656757   78377 cri.go:89] found id: ""
	I0422 18:28:41.656784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.656792   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:41.656796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:41.656861   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:41.694351   78377 cri.go:89] found id: ""
	I0422 18:28:41.694374   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.694382   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:41.694390   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:41.694402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:41.775490   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:41.775528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.820152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:41.820182   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:41.874035   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:41.874071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:41.889510   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:41.889534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:41.967706   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:44.468471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:44.483108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:44.483202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:44.522503   78377 cri.go:89] found id: ""
	I0422 18:28:44.522528   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.522536   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:44.522542   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:44.522590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:44.562004   78377 cri.go:89] found id: ""
	I0422 18:28:44.562028   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.562036   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:44.562042   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:44.562098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:44.608907   78377 cri.go:89] found id: ""
	I0422 18:28:44.608944   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.608955   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:44.608964   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:44.609027   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:44.651192   78377 cri.go:89] found id: ""
	I0422 18:28:44.651225   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.651235   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:44.651242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:44.651304   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:40.703806   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.704426   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.707600   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.361890   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.860686   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.883300   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.884863   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.693057   78377 cri.go:89] found id: ""
	I0422 18:28:44.693095   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.693102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:44.693108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:44.693152   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:44.731029   78377 cri.go:89] found id: ""
	I0422 18:28:44.731070   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.731079   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:44.731092   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:44.731165   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:44.768935   78377 cri.go:89] found id: ""
	I0422 18:28:44.768964   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.768985   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:44.768993   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:44.769044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:44.814942   78377 cri.go:89] found id: ""
	I0422 18:28:44.814966   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.814984   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:44.814992   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:44.815012   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:44.872586   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:44.872612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:44.929068   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:44.929125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:44.945931   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:44.945960   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:45.019871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:45.019907   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:45.019922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:47.601880   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:47.616133   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:47.616219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:47.656526   78377 cri.go:89] found id: ""
	I0422 18:28:47.656547   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.656554   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:47.656560   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:47.656618   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:47.696580   78377 cri.go:89] found id: ""
	I0422 18:28:47.696609   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.696619   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:47.696626   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:47.696684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:47.737309   78377 cri.go:89] found id: ""
	I0422 18:28:47.737340   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.737351   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:47.737359   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:47.737413   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:47.774541   78377 cri.go:89] found id: ""
	I0422 18:28:47.774572   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.774583   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:47.774591   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:47.774652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:47.810397   78377 cri.go:89] found id: ""
	I0422 18:28:47.810429   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.810437   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:47.810444   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:47.810506   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:47.847293   78377 cri.go:89] found id: ""
	I0422 18:28:47.847327   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.847337   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:47.847345   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:47.847403   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:47.887454   78377 cri.go:89] found id: ""
	I0422 18:28:47.887476   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.887486   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:47.887493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:47.887553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:47.926706   78377 cri.go:89] found id: ""
	I0422 18:28:47.926731   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.926740   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:47.926750   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:47.926769   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:48.007354   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:48.007382   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:48.007398   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:48.094355   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:48.094394   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:48.137163   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:48.137194   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:48.187732   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:48.187767   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:47.207153   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.704440   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.863696   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.360739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.384172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.386468   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.703686   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:50.717040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:50.717113   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:50.751573   78377 cri.go:89] found id: ""
	I0422 18:28:50.751598   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.751610   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:50.751617   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:50.751674   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:50.790434   78377 cri.go:89] found id: ""
	I0422 18:28:50.790465   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.790476   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:50.790483   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:50.790537   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:50.852414   78377 cri.go:89] found id: ""
	I0422 18:28:50.852442   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.852451   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:50.852457   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:50.852512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:50.891439   78377 cri.go:89] found id: ""
	I0422 18:28:50.891470   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.891481   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:50.891488   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:50.891553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:50.929376   78377 cri.go:89] found id: ""
	I0422 18:28:50.929409   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.929420   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:50.929428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:50.929493   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:50.963919   78377 cri.go:89] found id: ""
	I0422 18:28:50.963949   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.963957   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:50.963963   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:50.964022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:50.998583   78377 cri.go:89] found id: ""
	I0422 18:28:50.998621   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.998632   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:50.998640   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:50.998702   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:51.036477   78377 cri.go:89] found id: ""
	I0422 18:28:51.036504   78377 logs.go:276] 0 containers: []
	W0422 18:28:51.036511   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:51.036519   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:51.036531   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:51.092688   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:51.092735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.107749   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:51.107778   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:51.185620   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:51.185643   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:51.185665   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:51.268824   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:51.268856   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:53.814341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:53.829048   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:53.829123   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:53.873451   78377 cri.go:89] found id: ""
	I0422 18:28:53.873483   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.873493   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:53.873500   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:53.873564   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:53.915262   78377 cri.go:89] found id: ""
	I0422 18:28:53.915295   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.915306   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:53.915315   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:53.915404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:53.958526   78377 cri.go:89] found id: ""
	I0422 18:28:53.958556   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.958567   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:53.958575   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:53.958645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:53.997452   78377 cri.go:89] found id: ""
	I0422 18:28:53.997484   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.997496   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:53.997503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:53.997563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:54.035937   78377 cri.go:89] found id: ""
	I0422 18:28:54.035961   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.035970   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:54.035975   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:54.036022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:54.078858   78377 cri.go:89] found id: ""
	I0422 18:28:54.078885   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.078893   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:54.078898   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:54.078959   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:54.117431   78377 cri.go:89] found id: ""
	I0422 18:28:54.117454   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.117462   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:54.117470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:54.117516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:54.156022   78377 cri.go:89] found id: ""
	I0422 18:28:54.156050   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.156059   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:54.156068   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:54.156085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:54.234075   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:54.234095   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:54.234108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:54.314392   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:54.314430   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:54.359388   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:54.359420   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:54.416412   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:54.416449   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.704563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.206032   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.362075   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.861096   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.883667   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:57.386081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.934970   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:56.948741   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:56.948820   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:56.984911   78377 cri.go:89] found id: ""
	I0422 18:28:56.984943   78377 logs.go:276] 0 containers: []
	W0422 18:28:56.984954   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:56.984961   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:56.985026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:57.022939   78377 cri.go:89] found id: ""
	I0422 18:28:57.022967   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.022980   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:57.022986   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:57.023033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:57.064582   78377 cri.go:89] found id: ""
	I0422 18:28:57.064606   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.064619   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:57.064626   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:57.064686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:57.105214   78377 cri.go:89] found id: ""
	I0422 18:28:57.105248   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.105259   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:57.105266   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:57.105317   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:57.142061   78377 cri.go:89] found id: ""
	I0422 18:28:57.142093   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.142104   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:57.142112   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:57.142176   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:57.187628   78377 cri.go:89] found id: ""
	I0422 18:28:57.187658   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.187668   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:57.187675   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:57.187744   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:57.223614   78377 cri.go:89] found id: ""
	I0422 18:28:57.223637   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.223645   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:57.223650   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:57.223705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:57.261853   78377 cri.go:89] found id: ""
	I0422 18:28:57.261876   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.261883   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:57.261890   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:57.261902   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:57.317980   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:57.318017   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:57.334434   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:57.334469   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:57.409639   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:57.409664   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:57.409680   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:57.494197   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:57.494240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:56.709043   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.203924   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:58.861932   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.360398   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.360867   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.882692   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.883267   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.383872   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:00.069390   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:00.083231   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:00.083307   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:00.123418   78377 cri.go:89] found id: ""
	I0422 18:29:00.123448   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.123459   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:00.123470   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:00.123533   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:00.159047   78377 cri.go:89] found id: ""
	I0422 18:29:00.159070   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.159081   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:00.159087   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:00.159191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:00.197934   78377 cri.go:89] found id: ""
	I0422 18:29:00.197960   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.198074   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:00.198086   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:00.198164   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:00.235243   78377 cri.go:89] found id: ""
	I0422 18:29:00.235273   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.235281   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:00.235287   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:00.235342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:00.271866   78377 cri.go:89] found id: ""
	I0422 18:29:00.271901   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.271912   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:00.271921   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:00.271981   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:00.308481   78377 cri.go:89] found id: ""
	I0422 18:29:00.308518   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.308531   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:00.308539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:00.308590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:00.343970   78377 cri.go:89] found id: ""
	I0422 18:29:00.343998   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.344009   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:00.344016   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:00.344063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:00.381443   78377 cri.go:89] found id: ""
	I0422 18:29:00.381462   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.381468   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:00.381475   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:00.381486   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:00.436244   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:00.436278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:00.451487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:00.451512   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:00.522440   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:00.522467   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:00.522483   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:00.602301   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:00.602333   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:03.141925   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:03.155393   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:03.155470   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:03.192801   78377 cri.go:89] found id: ""
	I0422 18:29:03.192825   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.192832   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:03.192838   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:03.192896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:03.244352   78377 cri.go:89] found id: ""
	I0422 18:29:03.244384   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.244395   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:03.244403   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:03.244466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:03.303294   78377 cri.go:89] found id: ""
	I0422 18:29:03.303318   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.303326   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:03.303331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:03.303384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:03.354236   78377 cri.go:89] found id: ""
	I0422 18:29:03.354267   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.354275   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:03.354282   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:03.354343   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:03.394639   78377 cri.go:89] found id: ""
	I0422 18:29:03.394669   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.394679   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:03.394686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:03.394754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:03.431362   78377 cri.go:89] found id: ""
	I0422 18:29:03.431408   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.431419   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:03.431428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:03.431494   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:03.472150   78377 cri.go:89] found id: ""
	I0422 18:29:03.472178   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.472186   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:03.472191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:03.472253   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:03.508059   78377 cri.go:89] found id: ""
	I0422 18:29:03.508083   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.508091   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:03.508100   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:03.508112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:03.557491   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:03.557528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:03.573208   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:03.573245   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:03.643262   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:03.643284   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:03.643295   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:03.726353   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:03.726389   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:01.204827   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.204916   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.355065   77634 pod_ready.go:81] duration metric: took 4m0.0011361s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:04.355113   77634 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:04.355148   77634 pod_ready.go:38] duration metric: took 4m14.498231749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:04.355180   77634 kubeadm.go:591] duration metric: took 4m21.764385121s to restartPrimaryControlPlane
	W0422 18:29:04.355236   77634 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:04.355261   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:06.385395   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:08.883604   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:06.270762   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:06.284792   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:06.284866   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:06.324717   78377 cri.go:89] found id: ""
	I0422 18:29:06.324750   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.324762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:06.324770   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:06.324829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:06.368279   78377 cri.go:89] found id: ""
	I0422 18:29:06.368311   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.368320   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:06.368326   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:06.368390   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:06.413754   78377 cri.go:89] found id: ""
	I0422 18:29:06.413789   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.413800   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:06.413807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:06.413864   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:06.453290   78377 cri.go:89] found id: ""
	I0422 18:29:06.453324   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.453335   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:06.453343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:06.453402   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:06.494420   78377 cri.go:89] found id: ""
	I0422 18:29:06.494472   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.494485   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:06.494493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:06.494547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:06.533736   78377 cri.go:89] found id: ""
	I0422 18:29:06.533768   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.533776   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:06.533784   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:06.533855   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:06.575873   78377 cri.go:89] found id: ""
	I0422 18:29:06.575899   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.575910   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:06.575917   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:06.575973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:06.620505   78377 cri.go:89] found id: ""
	I0422 18:29:06.620532   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.620541   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:06.620555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:06.620569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:06.701583   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:06.701607   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:06.701621   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:06.789370   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:06.789408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.832879   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:06.832915   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:06.892055   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:06.892085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:09.409104   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:09.422213   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:09.422287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:09.463906   78377 cri.go:89] found id: ""
	I0422 18:29:09.463938   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.463949   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:09.463956   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:09.464016   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:09.504600   78377 cri.go:89] found id: ""
	I0422 18:29:09.504626   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.504634   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:09.504640   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:09.504701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:09.544271   78377 cri.go:89] found id: ""
	I0422 18:29:09.544297   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.544308   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:09.544315   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:09.544385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:09.584323   78377 cri.go:89] found id: ""
	I0422 18:29:09.584355   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.584367   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:09.584375   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:09.584443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:09.621595   78377 cri.go:89] found id: ""
	I0422 18:29:09.621622   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.621632   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:09.621638   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:09.621703   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:05.703491   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:07.704534   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.705814   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:11.383569   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:13.883521   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.654701   78377 cri.go:89] found id: ""
	I0422 18:29:09.654731   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.654741   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:09.654749   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:09.654809   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:09.691517   78377 cri.go:89] found id: ""
	I0422 18:29:09.691544   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.691555   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:09.691561   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:09.691611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:09.726139   78377 cri.go:89] found id: ""
	I0422 18:29:09.726164   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.726172   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:09.726179   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:09.726192   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:09.796871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:09.796899   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:09.796920   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:09.876465   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:09.876509   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:09.917893   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:09.917930   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:09.968232   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:09.968273   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:12.484341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:12.499173   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:12.499243   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:12.536536   78377 cri.go:89] found id: ""
	I0422 18:29:12.536566   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.536577   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:12.536583   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:12.536642   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:12.578616   78377 cri.go:89] found id: ""
	I0422 18:29:12.578645   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.578655   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:12.578663   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:12.578742   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:12.615437   78377 cri.go:89] found id: ""
	I0422 18:29:12.615464   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.615475   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:12.615483   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:12.615552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:12.652622   78377 cri.go:89] found id: ""
	I0422 18:29:12.652647   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.652655   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:12.652661   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:12.652717   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:12.687831   78377 cri.go:89] found id: ""
	I0422 18:29:12.687863   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.687886   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:12.687895   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:12.687968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:12.725695   78377 cri.go:89] found id: ""
	I0422 18:29:12.725727   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.725734   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:12.725740   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:12.725801   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:12.764633   78377 cri.go:89] found id: ""
	I0422 18:29:12.764660   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.764669   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:12.764676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:12.764754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:12.803161   78377 cri.go:89] found id: ""
	I0422 18:29:12.803188   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.803199   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:12.803209   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:12.803225   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:12.874276   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:12.874298   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:12.874311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:12.961086   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:12.961123   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:13.009108   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:13.009134   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:13.060695   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:13.060741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:11.706608   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:14.204779   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:16.384284   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.884060   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:15.578465   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:15.592781   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:15.592847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:15.630723   78377 cri.go:89] found id: ""
	I0422 18:29:15.630763   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.630775   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:15.630784   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:15.630848   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:15.672656   78377 cri.go:89] found id: ""
	I0422 18:29:15.672682   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.672689   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:15.672694   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:15.672743   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:15.718081   78377 cri.go:89] found id: ""
	I0422 18:29:15.718107   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.718115   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:15.718120   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:15.718168   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:15.757204   78377 cri.go:89] found id: ""
	I0422 18:29:15.757229   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.757237   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:15.757242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:15.757289   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:15.793481   78377 cri.go:89] found id: ""
	I0422 18:29:15.793507   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.793515   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:15.793520   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:15.793571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:15.831366   78377 cri.go:89] found id: ""
	I0422 18:29:15.831414   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.831435   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:15.831443   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:15.831510   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:15.868553   78377 cri.go:89] found id: ""
	I0422 18:29:15.868583   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.868593   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:15.868601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:15.868657   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:15.908487   78377 cri.go:89] found id: ""
	I0422 18:29:15.908517   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.908527   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:15.908538   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:15.908553   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.923479   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:15.923507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:15.995109   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:15.995156   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:15.995172   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:16.074773   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:16.074812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.122088   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:16.122114   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:18.674525   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:18.688006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:18.688077   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:18.726070   78377 cri.go:89] found id: ""
	I0422 18:29:18.726101   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.726114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:18.726122   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:18.726183   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:18.762885   78377 cri.go:89] found id: ""
	I0422 18:29:18.762916   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.762928   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:18.762936   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:18.762996   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:18.802266   78377 cri.go:89] found id: ""
	I0422 18:29:18.802289   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.802297   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:18.802302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:18.802349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:18.841407   78377 cri.go:89] found id: ""
	I0422 18:29:18.841445   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.841453   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:18.841459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:18.841515   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:18.877234   78377 cri.go:89] found id: ""
	I0422 18:29:18.877308   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.877330   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:18.877343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:18.877410   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:18.917025   78377 cri.go:89] found id: ""
	I0422 18:29:18.917056   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.917063   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:18.917068   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:18.917124   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:18.954201   78377 cri.go:89] found id: ""
	I0422 18:29:18.954228   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.954235   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:18.954241   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:18.954298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:18.992427   78377 cri.go:89] found id: ""
	I0422 18:29:18.992454   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.992463   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:18.992471   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:18.992482   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:19.041093   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:19.041125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:19.056711   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:19.056742   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:19.142569   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:19.142593   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:19.142604   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:19.217815   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:19.217855   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.704652   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.704899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:21.391438   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:22.376750   77929 pod_ready.go:81] duration metric: took 4m0.000534542s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:22.376787   77929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:22.376811   77929 pod_ready.go:38] duration metric: took 4m11.560762914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:22.376844   77929 kubeadm.go:591] duration metric: took 4m19.827120959s to restartPrimaryControlPlane
	W0422 18:29:22.376929   77929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:22.376953   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:21.767953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:21.783373   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:21.783428   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:21.821614   78377 cri.go:89] found id: ""
	I0422 18:29:21.821644   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.821656   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:21.821664   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:21.821725   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:21.857122   78377 cri.go:89] found id: ""
	I0422 18:29:21.857151   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.857161   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:21.857168   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:21.857228   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:21.894803   78377 cri.go:89] found id: ""
	I0422 18:29:21.894825   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.894833   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:21.894841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:21.894896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:21.933665   78377 cri.go:89] found id: ""
	I0422 18:29:21.933701   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.933712   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:21.933723   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:21.933787   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:21.973071   78377 cri.go:89] found id: ""
	I0422 18:29:21.973113   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.973125   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:21.973143   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:21.973210   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:22.011359   78377 cri.go:89] found id: ""
	I0422 18:29:22.011391   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.011403   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:22.011410   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:22.011488   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:22.049681   78377 cri.go:89] found id: ""
	I0422 18:29:22.049709   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.049716   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:22.049721   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:22.049782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:22.088347   78377 cri.go:89] found id: ""
	I0422 18:29:22.088375   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.088386   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:22.088396   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:22.088410   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:22.142224   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:22.142267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:22.156643   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:22.156668   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:22.231849   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:22.231879   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:22.231892   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:22.313426   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:22.313470   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
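	[annotation] The block above is minikube probing, for each expected control-plane component, whether any CRI container exists, and then falling back to node-level diagnostics when none is found. A condensed, illustrative sketch of that probe sequence, using only commands that appear verbatim in the log and runnable on the node:

	    # Probe each component the way the log does; empty output means "no container".
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      if [ -z "$ids" ]; then
	        echo "no container found matching $c"
	      else
	        echo "$c: $ids"
	      fi
	    done
	    # Fallback diagnostics, as gathered in the log:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

	This is a sketch of what the log records, not minikube's own implementation.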
	I0422 18:29:21.203699   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:23.204704   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:25.206832   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:24.863473   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:24.882024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:24.882098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:24.924050   78377 cri.go:89] found id: ""
	I0422 18:29:24.924081   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.924092   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:24.924100   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:24.924163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:24.976296   78377 cri.go:89] found id: ""
	I0422 18:29:24.976326   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.976335   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:24.976345   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:24.976412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:25.029222   78377 cri.go:89] found id: ""
	I0422 18:29:25.029251   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.029272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:25.029280   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:25.029349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:25.077673   78377 cri.go:89] found id: ""
	I0422 18:29:25.077706   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.077717   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:25.077724   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:25.077784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:25.125043   78377 cri.go:89] found id: ""
	I0422 18:29:25.125078   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.125090   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:25.125098   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:25.125179   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:25.175533   78377 cri.go:89] found id: ""
	I0422 18:29:25.175566   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.175577   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:25.175585   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:25.175647   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:25.221986   78377 cri.go:89] found id: ""
	I0422 18:29:25.222016   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.222024   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:25.222030   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:25.222091   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:25.264497   78377 cri.go:89] found id: ""
	I0422 18:29:25.264536   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.264547   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:25.264558   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:25.264574   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:25.374379   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:25.374438   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:25.418690   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:25.418726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:25.472266   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:25.472300   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:25.488487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:25.488582   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:25.586957   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:28.087958   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:28.102224   78377 kubeadm.go:591] duration metric: took 4m2.253635072s to restartPrimaryControlPlane
	W0422 18:29:28.102310   78377 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:28.102339   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:27.706178   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:30.203899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:31.612457   78377 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.510090318s)
	I0422 18:29:31.612545   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:31.628958   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:31.640917   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:31.652696   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:31.652721   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:31.652770   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:31.664114   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:31.664168   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:31.674923   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:31.684843   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:31.684896   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:31.695240   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.706058   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:31.706111   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.717091   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:31.727265   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:31.727336   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
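	[annotation] The grep/rm pairs above are the stale-kubeconfig cleanup minikube performs before re-running kubeadm init after a reset: each /etc/kubernetes/*.conf is checked for the expected control-plane endpoint (port 8443 for this profile) and removed if the check fails; here all four files are already absent, so every grep exits with status 2 and the rm is a no-op. A minimal sketch of that cleanup, assuming the endpoint and file list shown in the log:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Any nonzero grep exit (file missing, or pointing at another endpoint)
	      # leads to removal, matching the behaviour recorded above.
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done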
	I0422 18:29:31.737801   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:31.812467   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:29:31.812529   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:31.966913   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:31.967059   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:31.967197   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:32.154019   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:32.156034   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:32.156123   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:32.156226   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:32.156318   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:32.156373   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:32.156431   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:32.156486   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:32.156545   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:32.156925   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:32.157393   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:32.157903   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:32.157945   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:32.158030   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:32.431206   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:32.644858   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:32.778777   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:32.983609   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:32.999320   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:32.999451   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:32.999532   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:33.136671   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:33.138828   78377 out.go:204]   - Booting up control plane ...
	I0422 18:29:33.138935   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:33.143714   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:33.145398   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:33.157636   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:33.157801   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:29:32.204107   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:34.707228   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:36.541281   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.185998541s)
	I0422 18:29:36.541367   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:36.558729   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:36.569635   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:36.579901   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:36.579919   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:36.579959   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:36.589540   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:36.589602   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:36.600704   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:36.610945   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:36.611012   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:36.621316   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.631251   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:36.631305   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.641661   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:36.650970   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:36.651049   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:36.661012   77634 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:36.717676   77634 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:36.717771   77634 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:36.861264   77634 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:36.861404   77634 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:36.861534   77634 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:37.083032   77634 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:37.084958   77634 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:37.085069   77634 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:37.085179   77634 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:37.085296   77634 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:37.085387   77634 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:37.085505   77634 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:37.085579   77634 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:37.085665   77634 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:37.085748   77634 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:37.085869   77634 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:37.085985   77634 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:37.086037   77634 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:37.086114   77634 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:37.337747   77634 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:37.538036   77634 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:37.630303   77634 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:37.755713   77634 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:38.081451   77634 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:38.082265   77634 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:38.084958   77634 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:38.086755   77634 out.go:204]   - Booting up control plane ...
	I0422 18:29:38.086893   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:38.087023   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:38.089714   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:38.108313   77634 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:38.108786   77634 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:38.108849   77634 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:38.241537   77634 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:38.241681   77634 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:37.203550   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:39.205619   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:38.743798   77634 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.847818ms
	I0422 18:29:38.743910   77634 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:44.245440   77634 kubeadm.go:309] [api-check] The API server is healthy after 5.501913498s
	I0422 18:29:44.265283   77634 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:29:44.280940   77634 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:29:44.318688   77634 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:29:44.318990   77634 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-782377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:29:44.332201   77634 kubeadm.go:309] [bootstrap-token] Using token: o52gh5.f6sjmkidroy1sl61
	I0422 18:29:44.333546   77634 out.go:204]   - Configuring RBAC rules ...
	I0422 18:29:44.333670   77634 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:29:44.342847   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:29:44.350983   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:29:44.354214   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:29:44.361351   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:29:44.365170   77634 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:29:44.654414   77634 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:29:45.170247   77634 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:29:45.654714   77634 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:29:45.654744   77634 kubeadm.go:309] 
	I0422 18:29:45.654847   77634 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:29:45.654871   77634 kubeadm.go:309] 
	I0422 18:29:45.654984   77634 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:29:45.654996   77634 kubeadm.go:309] 
	I0422 18:29:45.655028   77634 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:29:45.655108   77634 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:29:45.655201   77634 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:29:45.655211   77634 kubeadm.go:309] 
	I0422 18:29:45.655308   77634 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:29:45.655317   77634 kubeadm.go:309] 
	I0422 18:29:45.655395   77634 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:29:45.655414   77634 kubeadm.go:309] 
	I0422 18:29:45.655486   77634 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:29:45.655597   77634 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:29:45.655700   77634 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:29:45.655714   77634 kubeadm.go:309] 
	I0422 18:29:45.655824   77634 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:29:45.655951   77634 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:29:45.655963   77634 kubeadm.go:309] 
	I0422 18:29:45.656067   77634 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656226   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:29:45.656258   77634 kubeadm.go:309] 	--control-plane 
	I0422 18:29:45.656265   77634 kubeadm.go:309] 
	I0422 18:29:45.656383   77634 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:29:45.656394   77634 kubeadm.go:309] 
	I0422 18:29:45.656513   77634 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656602   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:29:45.657124   77634 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:29:45.657152   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:29:45.657168   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:29:45.658873   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:29:41.705450   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:44.205661   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:45.660184   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:29:45.671834   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
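	[annotation] Because the kvm2 driver plus crio runtime was detected, minikube falls back to the built-in bridge CNI and writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist on the node. The payload itself is not printed in the log; one way to inspect what was actually written (illustrative, using the same minikube binary and profile name as this run) is:

	    out/minikube-linux-amd64 -p embed-certs-782377 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"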
	I0422 18:29:45.693947   77634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:29:45.694034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:45.694054   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-782377 minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=embed-certs-782377 minikube.k8s.io/primary=true
	I0422 18:29:45.901437   77634 ops.go:34] apiserver oom_adj: -16
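	[annotation] Three bootstrap steps run back-to-back above: the API server's oom_adj is read (reported as -16), a minikube-rbac ClusterRoleBinding grants cluster-admin to the kube-system default ServiceAccount, and the node is labelled with the minikube.k8s.io metadata. Condensed into runnable form, taken directly from the logged commands (label set shortened for readability):

	    KUBECTL=/var/lib/minikube/binaries/v1.30.0/kubectl
	    sudo $KUBECTL create clusterrolebinding minikube-rbac \
	      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo $KUBECTL --kubeconfig=/var/lib/minikube/kubeconfig \
	      label --overwrite nodes embed-certs-782377 \
	      minikube.k8s.io/name=embed-certs-782377 minikube.k8s.io/primary=true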
	I0422 18:29:45.901443   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.402050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.902222   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.402527   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.901535   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.206598   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.703899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.401738   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:48.902497   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.402046   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.901756   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.402023   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.901600   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.401905   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.901739   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.401859   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.902155   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.661872   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.28489375s)
	I0422 18:29:54.661952   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:54.679790   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:54.689947   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:54.700173   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:54.700191   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:54.700230   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:29:54.711462   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:54.711519   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:54.721157   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:29:54.730698   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:54.730769   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:54.740596   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.750450   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:54.750521   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.760582   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:29:54.770551   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:54.770608   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:54.781181   77929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:54.834872   77929 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:54.834950   77929 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:54.982435   77929 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:54.982574   77929 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:54.982675   77929 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:55.208724   77929 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:50.704498   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:53.203270   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.206485   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.210946   77929 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:55.211072   77929 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:55.211180   77929 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:55.211326   77929 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:55.211425   77929 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:55.211546   77929 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:55.211655   77929 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:55.211746   77929 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:55.211831   77929 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:55.211932   77929 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:55.212028   77929 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:55.212076   77929 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:55.212150   77929 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:55.456090   77929 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:55.747103   77929 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:55.940962   77929 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:56.076850   77929 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:56.253326   77929 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:56.253921   77929 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:56.259311   77929 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:53.402196   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:53.902328   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.402353   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.901736   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.401514   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.902415   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.402371   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.902117   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.401817   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.902050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.402034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.574005   77634 kubeadm.go:1107] duration metric: took 12.880033802s to wait for elevateKubeSystemPrivileges
	W0422 18:29:58.574051   77634 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:29:58.574061   77634 kubeadm.go:393] duration metric: took 5m16.036878933s to StartCluster
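	[annotation] The repeated "kubectl get sa default" runs above, issued roughly every 500ms, read as a readiness poll: once the default ServiceAccount exists, the elevateKubeSystemPrivileges step completes (12.88s here) and StartCluster returns. A hand-rolled equivalent of that poll, offered only as an illustration of what the loop is waiting for:

	    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done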
	I0422 18:29:58.574083   77634 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.574173   77634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:29:58.576621   77634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.576908   77634 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:29:58.578444   77634 out.go:177] * Verifying Kubernetes components...
	I0422 18:29:58.576967   77634 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:29:58.577120   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:29:58.579836   77634 addons.go:69] Setting default-storageclass=true in profile "embed-certs-782377"
	I0422 18:29:58.579846   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:29:58.579850   77634 addons.go:69] Setting metrics-server=true in profile "embed-certs-782377"
	I0422 18:29:58.579873   77634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-782377"
	I0422 18:29:58.579896   77634 addons.go:234] Setting addon metrics-server=true in "embed-certs-782377"
	W0422 18:29:58.579910   77634 addons.go:243] addon metrics-server should already be in state true
	I0422 18:29:58.579952   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.579841   77634 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-782377"
	I0422 18:29:58.580057   77634 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-782377"
	W0422 18:29:58.580070   77634 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:29:58.580099   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.580279   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580284   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580301   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580308   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580460   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580488   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.603276   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0422 18:29:58.603459   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0422 18:29:58.603483   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 18:29:58.607248   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607265   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607392   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607836   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.607853   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.607983   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.608001   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.608344   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608373   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608505   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.608932   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.608963   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612034   77634 addons.go:234] Setting addon default-storageclass=true in "embed-certs-782377"
	W0422 18:29:58.612056   77634 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:29:58.612084   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.612467   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.612485   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612786   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.612802   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.613185   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.613700   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.613728   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.630170   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0422 18:29:58.630586   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.631061   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.631081   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.631523   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.631693   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.631847   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0422 18:29:58.632457   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.632941   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.632966   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.633179   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0422 18:29:58.633322   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.633567   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.633688   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.635830   77634 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:29:58.633856   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.634354   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.636961   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.637004   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:29:58.637027   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:29:58.637045   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.637006   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.637294   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.637508   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.639287   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.640999   77634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:29:58.640236   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:56.261447   77929 out.go:204]   - Booting up control plane ...
	I0422 18:29:56.261539   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:56.261635   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:56.261736   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:56.285519   77929 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:56.285675   77929 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:56.285752   77929 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:56.437635   77929 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:56.437767   77929 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:56.944001   77929 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 506.500244ms
	I0422 18:29:56.944104   77929 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:58.640741   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.642428   77634 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.641034   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.642448   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:29:58.642456   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.642470   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.642590   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.642733   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.642860   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.645684   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646424   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.646469   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646728   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.646929   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.647079   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.647331   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.657385   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 18:29:58.658062   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.658658   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.658676   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.659065   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.659314   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.661001   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.661274   77634 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:58.661292   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:29:58.661309   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.664551   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.665029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665185   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.665397   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.665560   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.665692   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.840086   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:29:58.872963   77634 node_ready.go:35] waiting up to 6m0s for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882942   77634 node_ready.go:49] node "embed-certs-782377" has status "Ready":"True"
	I0422 18:29:58.882978   77634 node_ready.go:38] duration metric: took 9.978929ms for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882990   77634 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:58.892484   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:29:58.964679   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.987690   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:59.001748   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:29:59.001776   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:29:59.095009   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:29:59.095039   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:29:59.242427   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.242451   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:29:59.321464   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.989825   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025095721s)
	I0422 18:29:59.989883   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.989895   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.989828   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.002098611s)
	I0422 18:29:59.989974   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990005   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990193   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990231   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990239   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990247   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990254   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990306   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990341   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990355   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990369   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990380   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990504   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990523   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990572   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990588   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.025628   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.025655   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.025970   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.025991   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.434245   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.434287   77634 pod_ready.go:81] duration metric: took 1.54176792s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.434301   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454521   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.454545   77634 pod_ready.go:81] duration metric: took 20.235494ms for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454557   77634 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.473166   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.151631277s)
	I0422 18:30:00.473225   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473266   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473625   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.473660   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.473683   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.473706   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473719   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473998   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.474079   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.474098   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.474114   77634 addons.go:470] Verifying addon metrics-server=true in "embed-certs-782377"
	I0422 18:30:00.476224   77634 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:29:57.706757   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.206098   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.477945   77634 addons.go:505] duration metric: took 1.900979481s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:30:00.493925   77634 pod_ready.go:92] pod "etcd-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.493956   77634 pod_ready.go:81] duration metric: took 39.391277ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.493971   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502733   77634 pod_ready.go:92] pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.502762   77634 pod_ready.go:81] duration metric: took 8.782315ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502776   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517227   77634 pod_ready.go:92] pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.517249   77634 pod_ready.go:81] duration metric: took 14.465418ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517260   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881221   77634 pod_ready.go:92] pod "kube-proxy-6qsdm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.881245   77634 pod_ready.go:81] duration metric: took 363.979231ms for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881254   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277017   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:01.277103   77634 pod_ready.go:81] duration metric: took 395.840808ms for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277125   77634 pod_ready.go:38] duration metric: took 2.394112246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:01.277153   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:01.277240   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:01.295278   77634 api_server.go:72] duration metric: took 2.718332063s to wait for apiserver process to appear ...
	I0422 18:30:01.295316   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:01.295345   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:30:01.299754   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:30:01.300888   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:01.300912   77634 api_server.go:131] duration metric: took 5.588825ms to wait for apiserver health ...
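	The apiserver health wait logged just above boils down to polling https://192.168.50.114:8443/healthz until it returns HTTP 200 with the body "ok". A minimal Go sketch of that kind of poll follows; the endpoint is taken from the log, while the timeout value and the skipped TLS verification are assumptions made only to keep the sketch self-contained (minikube's own client trusts the cluster CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz polls an apiserver /healthz endpoint until it returns
	// HTTP 200 with body "ok", or the timeout elapses. This mirrors the
	// "waiting for apiserver healthz status" step in the log above.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for the sketch only: skip certificate checks so the
			// example runs without the cluster CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		// Endpoint reported for embed-certs-782377 in the log above.
		if err := pollHealthz("https://192.168.50.114:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}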
	I0422 18:30:01.300920   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:01.480184   77634 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:01.480216   77634 system_pods.go:61] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.480220   77634 system_pods.go:61] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.480224   77634 system_pods.go:61] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.480227   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.480231   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.480234   77634 system_pods.go:61] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.480237   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.480243   77634 system_pods.go:61] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.480246   77634 system_pods.go:61] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.480253   77634 system_pods.go:74] duration metric: took 179.327678ms to wait for pod list to return data ...
	I0422 18:30:01.480260   77634 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:01.676749   77634 default_sa.go:45] found service account: "default"
	I0422 18:30:01.676792   77634 default_sa.go:55] duration metric: took 196.525393ms for default service account to be created ...
	I0422 18:30:01.676805   77634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:01.881811   77634 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:01.881846   77634 system_pods.go:89] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.881852   77634 system_pods.go:89] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.881856   77634 system_pods.go:89] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.881861   77634 system_pods.go:89] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.881866   77634 system_pods.go:89] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.881871   77634 system_pods.go:89] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.881875   77634 system_pods.go:89] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.881884   77634 system_pods.go:89] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.881891   77634 system_pods.go:89] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.881902   77634 system_pods.go:126] duration metric: took 205.08856ms to wait for k8s-apps to be running ...
	I0422 18:30:01.881915   77634 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:01.881971   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:01.898653   77634 system_svc.go:56] duration metric: took 16.727076ms WaitForService to wait for kubelet
	I0422 18:30:01.898688   77634 kubeadm.go:576] duration metric: took 3.321747224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:01.898716   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:02.079527   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:02.079552   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:02.079567   77634 node_conditions.go:105] duration metric: took 180.844523ms to run NodePressure ...
	I0422 18:30:02.079581   77634 start.go:240] waiting for startup goroutines ...
	I0422 18:30:02.079590   77634 start.go:245] waiting for cluster config update ...
	I0422 18:30:02.079603   77634 start.go:254] writing updated cluster config ...
	I0422 18:30:02.079881   77634 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:02.131965   77634 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:02.133816   77634 out.go:177] * Done! kubectl is now configured to use "embed-certs-782377" cluster and "default" namespace by default
	I0422 18:30:02.446649   77929 kubeadm.go:309] [api-check] The API server is healthy after 5.502662802s
	I0422 18:30:02.466311   77929 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:02.504029   77929 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:02.586946   77929 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:02.587250   77929 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-856422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:02.600362   77929 kubeadm.go:309] [bootstrap-token] Using token: f03yx2.2vmzf4rav70vm6gm
	I0422 18:30:02.601830   77929 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:02.601961   77929 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:02.608688   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:02.621264   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:02.625695   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:02.630424   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:02.639203   77929 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:02.856167   77929 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:03.309505   77929 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:03.855419   77929 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:03.855443   77929 kubeadm.go:309] 
	I0422 18:30:03.855541   77929 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:03.855567   77929 kubeadm.go:309] 
	I0422 18:30:03.855643   77929 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:03.855653   77929 kubeadm.go:309] 
	I0422 18:30:03.855688   77929 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:03.855756   77929 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:03.855841   77929 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:03.855854   77929 kubeadm.go:309] 
	I0422 18:30:03.855909   77929 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:03.855915   77929 kubeadm.go:309] 
	I0422 18:30:03.855954   77929 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:03.855960   77929 kubeadm.go:309] 
	I0422 18:30:03.856051   77929 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:03.856171   77929 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:03.856248   77929 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:03.856259   77929 kubeadm.go:309] 
	I0422 18:30:03.856390   77929 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:03.856484   77929 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:03.856496   77929 kubeadm.go:309] 
	I0422 18:30:03.856636   77929 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.856729   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:03.856749   77929 kubeadm.go:309] 	--control-plane 
	I0422 18:30:03.856755   77929 kubeadm.go:309] 
	I0422 18:30:03.856823   77929 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:03.856829   77929 kubeadm.go:309] 
	I0422 18:30:03.856911   77929 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.857040   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:03.857540   77929 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:03.857569   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:30:03.857583   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:03.859350   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:03.860736   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:03.873189   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
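	The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist on the node. The exact file contents are not shown in the log; the Go sketch below writes a representative bridge + host-local conflist of the kind typically used for the kvm2 + crio combination, with the pod subnet chosen purely as an assumption.

	package main

	import "os"

	// A representative bridge CNI conflist; the real 496-byte file minikube
	// writes may differ in details, and the subnet here is an assumption.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		// Written locally for inspection; on the node it lands at
		// /etc/cni/net.d/1-k8s.conflist via the ssh_runner scp seen above.
		if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}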
	I0422 18:30:03.897193   77929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:03.897260   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:03.897317   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-856422 minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=default-k8s-diff-port-856422 minikube.k8s.io/primary=true
	I0422 18:30:04.114339   77929 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:04.114499   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:02.703452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.705502   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.615355   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.115530   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.614776   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.114991   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.614772   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.114921   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.614799   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.115218   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.614688   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:09.114578   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.203762   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.704636   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.615201   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.115526   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.614511   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.115041   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.615220   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.115463   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.614937   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.115470   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.615417   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:14.114916   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.158118   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:30:13.158841   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:13.159056   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:11.706452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.203931   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.614582   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.115466   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.615542   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.115554   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.614586   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.114645   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.614945   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.769793   77929 kubeadm.go:1107] duration metric: took 13.872592974s to wait for elevateKubeSystemPrivileges
	W0422 18:30:17.769857   77929 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:30:17.769869   77929 kubeadm.go:393] duration metric: took 5m15.279261637s to StartCluster
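	The repeated `kubectl get sa default` runs above poll for the default ServiceAccount to exist; once it does, the elevateKubeSystemPrivileges step (the minikube-rbac cluster-admin binding applied earlier) is considered complete. A small client-go sketch of an equivalent wait is below; the kubeconfig path mirrors the one in the log, and the timeout is an assumption.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForDefaultSA polls until the "default" ServiceAccount exists in the
	// "default" namespace, which is what the repeated
	// `kubectl get sa default` calls in the log are waiting for.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		// Kubeconfig path as reported on the node in the log above.
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}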
	I0422 18:30:17.769889   77929 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.769958   77929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:30:17.771921   77929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.772222   77929 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:30:17.774219   77929 out.go:177] * Verifying Kubernetes components...
	I0422 18:30:17.772365   77929 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:30:17.772496   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:30:17.776231   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:30:17.776249   77929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776267   77929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776294   77929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776307   77929 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:30:17.776321   77929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-856422"
	I0422 18:30:17.776343   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776284   77929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776412   77929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776430   77929 addons.go:243] addon metrics-server should already be in state true
	I0422 18:30:17.776469   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776775   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776809   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776778   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776846   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776777   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776926   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.796665   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0422 18:30:17.796701   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0422 18:30:17.796976   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0422 18:30:17.797083   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797472   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797609   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797795   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.797824   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798111   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798141   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798158   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798499   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798627   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798648   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798728   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.798776   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799001   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.799077   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.799107   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799274   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.803095   77929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.803141   77929 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:30:17.803175   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.803544   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.803580   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.820753   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0422 18:30:17.821272   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.821822   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.821839   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.822247   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.822315   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0422 18:30:17.822640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.823287   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0422 18:30:17.823373   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.823976   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.824141   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824152   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824479   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824498   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824561   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.824727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.825176   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.825646   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.825675   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.826014   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.828122   77929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:30:17.826808   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.829694   77929 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:17.829711   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:30:17.829729   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.831322   77929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:30:17.834942   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:30:17.834959   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:30:17.834979   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.833531   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.832894   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835054   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.835078   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.835468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.835674   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.837838   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838180   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.838204   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838459   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.838656   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.838827   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.838983   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.844804   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0422 18:30:17.845252   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.845762   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.845780   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.846071   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.846240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.847881   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.848127   77929 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:17.848142   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:30:17.848159   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.850959   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851369   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.851389   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.851786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.851918   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.852081   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.997608   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:30:18.066476   77929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.139937   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:18.141619   77929 node_ready.go:49] node "default-k8s-diff-port-856422" has status "Ready":"True"
	I0422 18:30:18.141645   77929 node_ready.go:38] duration metric: took 75.13675ms for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.141658   77929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:18.168289   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:18.217351   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:30:18.217374   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:30:18.280089   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:18.283704   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:30:18.283734   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:30:18.314907   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.314936   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:30:18.379950   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.595931   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.595969   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596350   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596374   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.596389   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596660   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596699   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596722   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610244   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.610277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.610614   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.610635   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610659   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.159553   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:18.159883   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:19.513892   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233747961s)
	I0422 18:30:19.513948   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.513961   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514326   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.514460   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.514491   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.514506   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514414   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517601   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.517617   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.805551   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425552646s)
	I0422 18:30:19.805610   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.805621   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.805986   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.806040   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.806064   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.806083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.807818   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.807865   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.807874   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.807889   77929 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-856422"
	I0422 18:30:19.809871   77929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0422 18:30:15.697614   77400 pod_ready.go:81] duration metric: took 4m0.000479463s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	E0422 18:30:15.697661   77400 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:30:15.697678   77400 pod_ready.go:38] duration metric: took 4m9.017394523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:15.697704   77400 kubeadm.go:591] duration metric: took 4m15.772560858s to restartPrimaryControlPlane
	W0422 18:30:15.697751   77400 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:30:15.697777   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:30:19.811644   77929 addons.go:505] duration metric: took 2.039289124s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0422 18:30:20.174912   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:20.675213   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.675247   77929 pod_ready.go:81] duration metric: took 2.506921343s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.675261   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681665   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.681690   77929 pod_ready.go:81] duration metric: took 6.421217ms for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681700   77929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687893   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.687926   77929 pod_ready.go:81] duration metric: took 6.218166ms for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687941   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696603   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.696634   77929 pod_ready.go:81] duration metric: took 8.684682ms for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696649   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702776   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.702800   77929 pod_ready.go:81] duration metric: took 6.141484ms for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702813   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073451   77929 pod_ready.go:92] pod "kube-proxy-4m8cm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.073485   77929 pod_ready.go:81] duration metric: took 370.663669ms for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073500   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474144   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.474175   77929 pod_ready.go:81] duration metric: took 400.665802ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474190   77929 pod_ready.go:38] duration metric: took 3.332515716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
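	The pod_ready checks above report a pod as "Ready":"True" when its Ready condition is True in the pod status. A short client-go sketch of that check follows; the kubeconfig path and the pod looked up (kube-proxy-4m8cm, taken from the log) are illustrative only.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether a pod's Ready condition is True, the check
	// behind the pod_ready.go "has status Ready:True" lines above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path as used by the integration run in the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18706-11572/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-4m8cm", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}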
	I0422 18:30:21.474207   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:21.474273   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:21.491320   77929 api_server.go:72] duration metric: took 3.719060391s to wait for apiserver process to appear ...
	I0422 18:30:21.491352   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:21.491378   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:30:21.496589   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:30:21.497405   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:21.497426   77929 api_server.go:131] duration metric: took 6.067469ms to wait for apiserver health ...
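The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint on the profile's host-side URL. A minimal manual equivalent from the test host (illustrative only; -k skips TLS verification because the apiserver certificate is signed by minikube's own CA) would be:

    # hypothetical manual version of the healthz probe seen in the log
    curl -sk https://192.168.61.206:8444/healthz
    # a healthy apiserver answers with: ok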
	I0422 18:30:21.497433   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:21.675885   77929 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:21.675912   77929 system_pods.go:61] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:21.675916   77929 system_pods.go:61] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:21.675924   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:21.675928   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:21.675932   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:21.675935   77929 system_pods.go:61] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:21.675939   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:21.675945   77929 system_pods.go:61] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:21.675949   77929 system_pods.go:61] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:21.675959   77929 system_pods.go:74] duration metric: took 178.519985ms to wait for pod list to return data ...
	I0422 18:30:21.675965   77929 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:21.872358   77929 default_sa.go:45] found service account: "default"
	I0422 18:30:21.872382   77929 default_sa.go:55] duration metric: took 196.412252ms for default service account to be created ...
	I0422 18:30:21.872391   77929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:22.075660   77929 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:22.075689   77929 system_pods.go:89] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:22.075694   77929 system_pods.go:89] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:22.075698   77929 system_pods.go:89] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:22.075702   77929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:22.075706   77929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:22.075710   77929 system_pods.go:89] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:22.075714   77929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:22.075722   77929 system_pods.go:89] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:22.075726   77929 system_pods.go:89] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:22.075735   77929 system_pods.go:126] duration metric: took 203.339608ms to wait for k8s-apps to be running ...
	I0422 18:30:22.075742   77929 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:22.075785   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:22.091186   77929 system_svc.go:56] duration metric: took 15.433207ms WaitForService to wait for kubelet
	I0422 18:30:22.091219   77929 kubeadm.go:576] duration metric: took 4.318966383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:22.091237   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:22.272944   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:22.272971   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:22.272980   77929 node_conditions.go:105] duration metric: took 181.734735ms to run NodePressure ...
	I0422 18:30:22.272991   77929 start.go:240] waiting for startup goroutines ...
	I0422 18:30:22.273000   77929 start.go:245] waiting for cluster config update ...
	I0422 18:30:22.273010   77929 start.go:254] writing updated cluster config ...
	I0422 18:30:22.273248   77929 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:22.323725   77929 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:22.325876   77929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-856422" cluster and "default" namespace by default
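With the profile's kubeconfig context written, a quick sanity check of the configured cluster (not part of the test run, purely a sketch using standard kubectl commands and the profile name from the log) could be:

    # confirm the context exists and the node reports Ready
    kubectl config get-contexts default-k8s-diff-port-856422
    kubectl --context default-k8s-diff-port-856422 get nodes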
	I0422 18:30:28.159925   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:28.160147   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.161034   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:48.161430   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.109960   77400 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.41215685s)
	I0422 18:30:48.110037   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:48.127246   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:30:48.138280   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:30:48.148521   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:30:48.148545   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:30:48.148588   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:30:48.160411   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:30:48.160483   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:30:48.170748   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:30:48.180399   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:30:48.180451   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:30:48.192521   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.202200   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:30:48.202274   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.212241   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:30:48.221754   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:30:48.221821   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
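The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here all four files are simply missing after the reset). Condensed into one loop, using only the commands visible in the log, the pattern is roughly:

    # drop kubeconfigs that do not point at the expected control plane
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done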
	I0422 18:30:48.231555   77400 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:30:48.456873   77400 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:57.943980   77400 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:30:57.944080   77400 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:30:57.944182   77400 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:30:57.944305   77400 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:30:57.944411   77400 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:30:57.944499   77400 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:30:57.946110   77400 out.go:204]   - Generating certificates and keys ...
	I0422 18:30:57.946192   77400 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:30:57.946262   77400 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:30:57.946385   77400 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:30:57.946464   77400 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:30:57.946559   77400 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:30:57.946683   77400 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:30:57.946772   77400 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:30:57.946835   77400 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:30:57.946902   77400 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:30:57.946963   77400 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:30:57.947000   77400 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:30:57.947054   77400 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:30:57.947116   77400 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:30:57.947201   77400 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:30:57.947283   77400 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:30:57.947383   77400 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:30:57.947458   77400 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:30:57.947589   77400 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:30:57.947662   77400 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:30:57.949092   77400 out.go:204]   - Booting up control plane ...
	I0422 18:30:57.949194   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:30:57.949279   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:30:57.949336   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:30:57.949419   77400 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:30:57.949505   77400 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:30:57.949544   77400 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:30:57.949664   77400 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:30:57.949739   77400 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:30:57.949794   77400 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.588061ms
	I0422 18:30:57.949862   77400 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:30:57.949957   77400 kubeadm.go:309] [api-check] The API server is healthy after 5.510546703s
	I0422 18:30:57.950048   77400 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:57.950152   77400 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:57.950204   77400 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:57.950352   77400 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-407991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:57.950453   77400 kubeadm.go:309] [bootstrap-token] Using token: cwotot.4qmmrydp0nd6w5tq
	I0422 18:30:57.951938   77400 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:57.952040   77400 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:57.952134   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:57.952285   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:57.952410   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:57.952535   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:57.952666   77400 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:57.952799   77400 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:57.952867   77400 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:57.952936   77400 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:57.952952   77400 kubeadm.go:309] 
	I0422 18:30:57.953013   77400 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:57.953019   77400 kubeadm.go:309] 
	I0422 18:30:57.953084   77400 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:57.953090   77400 kubeadm.go:309] 
	I0422 18:30:57.953110   77400 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:57.953199   77400 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:57.953281   77400 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:57.953289   77400 kubeadm.go:309] 
	I0422 18:30:57.953374   77400 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:57.953381   77400 kubeadm.go:309] 
	I0422 18:30:57.953453   77400 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:57.953461   77400 kubeadm.go:309] 
	I0422 18:30:57.953538   77400 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:57.953636   77400 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:57.953719   77400 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:57.953726   77400 kubeadm.go:309] 
	I0422 18:30:57.953813   77400 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:57.953919   77400 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:57.953930   77400 kubeadm.go:309] 
	I0422 18:30:57.954047   77400 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954187   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:57.954222   77400 kubeadm.go:309] 	--control-plane 
	I0422 18:30:57.954232   77400 kubeadm.go:309] 
	I0422 18:30:57.954364   77400 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:57.954374   77400 kubeadm.go:309] 
	I0422 18:30:57.954440   77400 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954553   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
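The join commands printed above pin the new node to the cluster CA via --discovery-token-ca-cert-hash. If that hash ever needed to be recomputed, the usual openssl pipeline (shown against minikube's certificateDir from the [certs] phase above; this is not something the test runs) is:

    # recompute the discovery token CA cert hash from the cluster CA
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'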
	I0422 18:30:57.954574   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:30:57.954583   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:57.956278   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:57.957592   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:57.970080   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
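The 496-byte conflist copied here is minikube's bridge CNI configuration; its exact contents are not shown in the log. A representative bridge + host-local conflist of the same shape (all values below are assumptions for illustration, not the file minikube actually writes) would look like:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF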
	I0422 18:30:57.991711   77400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:57.991779   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:57.991780   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-407991 minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=no-preload-407991 minikube.k8s.io/primary=true
	I0422 18:30:58.232025   77400 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:58.232162   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:58.732395   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.232855   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.732187   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.232654   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.732995   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.232856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.732735   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.232474   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.732930   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.232411   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.732457   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.232888   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.732856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.232873   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.733177   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.232682   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.733241   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.232711   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.732922   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.232815   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.732377   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.232576   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.732243   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.232350   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.732764   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.232338   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.357414   77400 kubeadm.go:1107] duration metric: took 13.365692776s to wait for elevateKubeSystemPrivileges
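The burst of kubectl get sa default calls above is minikube retrying roughly every 500ms until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges step waits on. Written as a standalone loop over the same command from the log (a sketch, not the test's own code):

    # poll until the default ServiceAccount is created, mirroring the ~500ms retries above
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done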
	W0422 18:31:11.357460   77400 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:31:11.357472   77400 kubeadm.go:393] duration metric: took 5m11.48385131s to StartCluster
	I0422 18:31:11.357493   77400 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.357565   77400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:31:11.359176   77400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.359391   77400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:31:11.360948   77400 out.go:177] * Verifying Kubernetes components...
	I0422 18:31:11.359461   77400 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:31:11.359641   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:31:11.362433   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:31:11.362446   77400 addons.go:69] Setting storage-provisioner=true in profile "no-preload-407991"
	I0422 18:31:11.362464   77400 addons.go:69] Setting default-storageclass=true in profile "no-preload-407991"
	I0422 18:31:11.362486   77400 addons.go:69] Setting metrics-server=true in profile "no-preload-407991"
	I0422 18:31:11.362495   77400 addons.go:234] Setting addon storage-provisioner=true in "no-preload-407991"
	I0422 18:31:11.362500   77400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-407991"
	I0422 18:31:11.362515   77400 addons.go:234] Setting addon metrics-server=true in "no-preload-407991"
	W0422 18:31:11.362527   77400 addons.go:243] addon metrics-server should already be in state true
	W0422 18:31:11.362506   77400 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:31:11.362557   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362567   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362929   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362932   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362963   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362971   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362974   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.363144   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.379089   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0422 18:31:11.379582   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.380121   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.380145   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.380496   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.381098   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.381132   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.383229   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 18:31:11.383513   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0422 18:31:11.383642   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.383977   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.384136   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384148   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384552   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.384754   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384770   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384801   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.385103   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.386102   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.386130   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.388554   77400 addons.go:234] Setting addon default-storageclass=true in "no-preload-407991"
	W0422 18:31:11.388569   77400 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:31:11.388589   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.388921   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.388938   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.401669   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0422 18:31:11.402268   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.402852   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.402869   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.403427   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.403610   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.404849   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0422 18:31:11.405356   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.405588   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.406112   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.406129   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.407696   77400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:31:11.406649   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.409174   77400 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.409195   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:31:11.409214   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.409261   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.411378   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.412836   77400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:31:11.411939   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0422 18:31:11.414011   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:31:11.414027   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:31:11.413155   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.414045   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.414069   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.413487   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.414097   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.413841   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.414686   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.414781   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.414794   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.414871   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.415256   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.415607   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.416288   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.416343   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.417257   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417623   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.417644   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417898   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.418074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.418325   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.418468   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.432218   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0422 18:31:11.432682   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.433096   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.433108   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.433685   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.433887   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.435675   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.435931   77400 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.435952   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:31:11.435969   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.438700   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439107   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.439144   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439237   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.439482   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.439662   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.439833   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
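Each sshutil client above opens an SSH session into the VM with the per-machine key. The equivalent manual session, reusing the IP, username, and key path exactly as logged (illustrative only), would be:

    # open a shell in the no-preload-407991 VM the same way ssh_runner does
    ssh -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa \
        docker@192.168.39.164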
	I0422 18:31:11.610190   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:31:11.654061   77400 node_ready.go:35] waiting up to 6m0s for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663869   77400 node_ready.go:49] node "no-preload-407991" has status "Ready":"True"
	I0422 18:31:11.663904   77400 node_ready.go:38] duration metric: took 9.806821ms for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663917   77400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:11.673895   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:11.752785   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.770023   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:31:11.770054   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:31:11.799895   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.872083   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:31:11.872113   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:31:11.984597   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:11.984626   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:31:12.059137   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:13.130584   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330646778s)
	I0422 18:31:13.130694   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130718   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.130716   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37789401s)
	I0422 18:31:13.130833   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130847   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131067   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131135   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131159   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131172   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131289   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131304   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131312   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131319   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131327   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.131559   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131574   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131601   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131621   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131621   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.173181   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.173205   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.173478   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.173501   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.279764   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.220585481s)
	I0422 18:31:13.279813   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.279828   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280221   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280241   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280261   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280276   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.280290   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280532   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280570   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280577   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280586   77400 addons.go:470] Verifying addon metrics-server=true in "no-preload-407991"
	I0422 18:31:13.282757   77400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:31:13.284029   77400 addons.go:505] duration metric: took 1.924572004s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
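The three addons enabled here programmatically can also be toggled by hand against the same profile with the minikube CLI; an illustrative (not test-generated) invocation, assuming the addon names match what the log reports as enabled:

    # enable the same addons the test turned on via the addons manager
    minikube -p no-preload-407991 addons enable storage-provisioner
    minikube -p no-preload-407991 addons enable default-storageclass
    minikube -p no-preload-407991 addons enable metrics-server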
	I0422 18:31:13.681968   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.682004   77400 pod_ready.go:81] duration metric: took 2.008061657s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.682017   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687240   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.687268   77400 pod_ready.go:81] duration metric: took 5.242949ms for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687281   77400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693047   77400 pod_ready.go:92] pod "etcd-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.693074   77400 pod_ready.go:81] duration metric: took 5.784769ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693086   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705008   77400 pod_ready.go:92] pod "kube-apiserver-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.705028   77400 pod_ready.go:81] duration metric: took 11.934672ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705037   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721814   77400 pod_ready.go:92] pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.721840   77400 pod_ready.go:81] duration metric: took 16.796546ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721855   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079660   77400 pod_ready.go:92] pod "kube-proxy-47g8k" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.079681   77400 pod_ready.go:81] duration metric: took 357.819791ms for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079692   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480000   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.480026   77400 pod_ready.go:81] duration metric: took 400.326493ms for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480037   77400 pod_ready.go:38] duration metric: took 2.816106046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:14.480054   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:31:14.480123   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:31:14.508798   77400 api_server.go:72] duration metric: took 3.149365253s to wait for apiserver process to appear ...
	I0422 18:31:14.508822   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:31:14.508842   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:31:14.523293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:31:14.524410   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:31:14.524439   77400 api_server.go:131] duration metric: took 15.608906ms to wait for apiserver health ...
	I0422 18:31:14.524448   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:31:14.682120   77400 system_pods.go:59] 9 kube-system pods found
	I0422 18:31:14.682152   77400 system_pods.go:61] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:14.682157   77400 system_pods.go:61] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:14.682161   77400 system_pods.go:61] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:14.682164   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:14.682169   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:14.682173   77400 system_pods.go:61] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:14.682178   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:14.682188   77400 system_pods.go:61] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:14.682194   77400 system_pods.go:61] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:14.682205   77400 system_pods.go:74] duration metric: took 157.750249ms to wait for pod list to return data ...
	I0422 18:31:14.682222   77400 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:31:14.878556   77400 default_sa.go:45] found service account: "default"
	I0422 18:31:14.878581   77400 default_sa.go:55] duration metric: took 196.353021ms for default service account to be created ...
	I0422 18:31:14.878590   77400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:31:15.081385   77400 system_pods.go:86] 9 kube-system pods found
	I0422 18:31:15.081415   77400 system_pods.go:89] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:15.081425   77400 system_pods.go:89] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:15.081430   77400 system_pods.go:89] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:15.081434   77400 system_pods.go:89] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:15.081438   77400 system_pods.go:89] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:15.081448   77400 system_pods.go:89] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:15.081452   77400 system_pods.go:89] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:15.081458   77400 system_pods.go:89] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:15.081464   77400 system_pods.go:89] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:15.081476   77400 system_pods.go:126] duration metric: took 202.881032ms to wait for k8s-apps to be running ...
	I0422 18:31:15.081484   77400 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:31:15.081530   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:15.098245   77400 system_svc.go:56] duration metric: took 16.748933ms WaitForService to wait for kubelet
	I0422 18:31:15.098278   77400 kubeadm.go:576] duration metric: took 3.738847086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:31:15.098302   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:31:15.278812   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:31:15.278839   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:31:15.278848   77400 node_conditions.go:105] duration metric: took 180.541553ms to run NodePressure ...
	I0422 18:31:15.278859   77400 start.go:240] waiting for startup goroutines ...
	I0422 18:31:15.278866   77400 start.go:245] waiting for cluster config update ...
	I0422 18:31:15.278875   77400 start.go:254] writing updated cluster config ...
	I0422 18:31:15.279242   77400 ssh_runner.go:195] Run: rm -f paused
	I0422 18:31:15.330788   77400 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:31:15.333274   77400 out.go:177] * Done! kubectl is now configured to use "no-preload-407991" cluster and "default" namespace by default
	I0422 18:31:28.163100   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:31:28.163394   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163417   78377 kubeadm.go:309] 
	I0422 18:31:28.163487   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:31:28.163724   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:31:28.163734   78377 kubeadm.go:309] 
	I0422 18:31:28.163791   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:31:28.163857   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:31:28.164010   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:31:28.164024   78377 kubeadm.go:309] 
	I0422 18:31:28.164159   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:31:28.164207   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:31:28.164251   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:31:28.164265   78377 kubeadm.go:309] 
	I0422 18:31:28.164413   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:31:28.164579   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:31:28.164607   78377 kubeadm.go:309] 
	I0422 18:31:28.164767   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:31:28.164919   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:31:28.165050   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:31:28.165153   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:31:28.165169   78377 kubeadm.go:309] 
	I0422 18:31:28.166948   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:31:28.167081   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:31:28.167206   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:31:28.167328   78377 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
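	[editor's note] The repeated [kubelet-check] failure quoted above is simply an HTTP GET against the kubelet's health endpoint that keeps getting "connection refused". A minimal sketch of that probe (not kubeadm or minikube source; the port 10248 and the /healthz path are taken directly from the log lines):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Mirrors the failure above: the kubelet is not listening on 10248.
			fmt.Println("kubelet healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz status:", resp.Status)
	}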
	
	I0422 18:31:28.167404   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:31:28.857637   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:28.875137   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:31:28.887680   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:31:28.887713   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:31:28.887768   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:31:28.900305   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:31:28.900364   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:31:28.912825   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:31:28.927080   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:31:28.927184   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:31:28.939052   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.949650   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:31:28.949726   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.960782   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:31:28.972073   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:31:28.972131   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
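	[editor's note] The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that is missing it (or missing entirely), so the retried kubeadm init below regenerates them. A minimal local Go sketch of that cleanup (illustrative only; the real run executes the equivalent shell commands over SSH as root):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, path := range confs {
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Unreadable/missing file or stale endpoint: remove it so kubeadm rewrites it.
				fmt.Println("removing stale config:", path)
				_ = os.Remove(path)
			}
		}
	}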
	I0422 18:31:28.983161   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:31:29.220135   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:33:25.762018   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:33:25.762162   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:33:25.763935   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:33:25.763996   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:33:25.764109   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:33:25.764234   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:33:25.764384   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:33:25.764478   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:33:25.766215   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:33:25.766332   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:33:25.766425   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:33:25.766525   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:33:25.766612   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:33:25.766680   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:33:25.766725   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:33:25.766778   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:33:25.766829   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:33:25.766907   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:33:25.766999   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:33:25.767062   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:33:25.767150   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:33:25.767210   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:33:25.767277   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:33:25.767378   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:33:25.767465   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:33:25.767602   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:33:25.767714   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:33:25.767848   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:33:25.767944   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:33:25.769378   78377 out.go:204]   - Booting up control plane ...
	I0422 18:33:25.769497   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:33:25.769600   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:33:25.769691   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:33:25.769819   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:33:25.769987   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:33:25.770059   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:33:25.770164   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770451   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770538   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770748   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770827   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771002   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771066   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771264   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771397   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771583   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771594   78377 kubeadm.go:309] 
	I0422 18:33:25.771655   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:33:25.771711   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:33:25.771726   78377 kubeadm.go:309] 
	I0422 18:33:25.771779   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:33:25.771836   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:33:25.771973   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:33:25.771981   78377 kubeadm.go:309] 
	I0422 18:33:25.772091   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:33:25.772132   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:33:25.772175   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:33:25.772182   78377 kubeadm.go:309] 
	I0422 18:33:25.772286   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:33:25.772374   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:33:25.772381   78377 kubeadm.go:309] 
	I0422 18:33:25.772491   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:33:25.772570   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:33:25.772641   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:33:25.772702   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:33:25.772741   78377 kubeadm.go:309] 
	I0422 18:33:25.772767   78377 kubeadm.go:393] duration metric: took 7m59.977108208s to StartCluster
	I0422 18:33:25.772800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:33:25.772854   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:33:25.824904   78377 cri.go:89] found id: ""
	I0422 18:33:25.824928   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.824946   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:33:25.824957   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:33:25.825011   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:33:25.864537   78377 cri.go:89] found id: ""
	I0422 18:33:25.864563   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.864570   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:33:25.864575   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:33:25.864630   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:33:25.906760   78377 cri.go:89] found id: ""
	I0422 18:33:25.906784   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.906793   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:33:25.906800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:33:25.906868   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:33:25.945325   78377 cri.go:89] found id: ""
	I0422 18:33:25.945347   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.945354   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:33:25.945360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:33:25.945407   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:33:25.984005   78377 cri.go:89] found id: ""
	I0422 18:33:25.984035   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.984052   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:33:25.984059   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:33:25.984121   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:33:26.023499   78377 cri.go:89] found id: ""
	I0422 18:33:26.023525   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.023535   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:33:26.023549   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:33:26.023611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:33:26.064439   78377 cri.go:89] found id: ""
	I0422 18:33:26.064468   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.064479   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:33:26.064487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:33:26.064552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:33:26.104231   78377 cri.go:89] found id: ""
	I0422 18:33:26.104254   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.104262   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
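	[editor's note] Each "listing CRI containers" step above shells out to crictl with a name filter and logs a warning when nothing matches. A minimal Go sketch of the same enumeration (assumes crictl is installed and the default CRI endpoint is reachable; the component names are copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Println(name, "listing failed:", err)
				continue
			}
			ids := strings.Fields(string(out))
			// An empty list corresponds to the "No container was found matching ..." warnings above.
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}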
	I0422 18:33:26.104270   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:33:26.104282   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:33:26.213826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:33:26.213871   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:33:26.278837   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:33:26.278866   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:33:26.337634   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:33:26.337677   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:33:26.351578   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:33:26.351605   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:33:26.445108   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0422 18:33:26.445139   78377 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:33:26.445177   78377 out.go:239] * 
	W0422 18:33:26.445248   78377 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.445279   78377 out.go:239] * 
	W0422 18:33:26.446406   78377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:33:26.450209   78377 out.go:177] 
	W0422 18:33:26.451494   78377 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.451552   78377 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:33:26.451576   78377 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:33:26.453333   78377 out.go:177] 
	
	
	==> CRI-O <==
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.431676390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3ad48ebc-cdde-43d9-bad2-db7763c6fa60 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.433121052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee34f9c4-3fec-4efb-8ff8-4c93cfa5bfa7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.433768745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811164433742199,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee34f9c4-3fec-4efb-8ff8-4c93cfa5bfa7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.434392372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95a62bf1-46b8-418e-b375-b9e125cebd90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.434471150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95a62bf1-46b8-418e-b375-b9e125cebd90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.435081648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95a62bf1-46b8-418e-b375-b9e125cebd90 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.441650001Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=174e8534-6e30-47c2-8d39-a333f4530e8e name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.442334797Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c936b1b0faef0e8ab67be68d393543b0e259e4c1a8c7aff264a326baf35ab528,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-jmdnk,Uid:54d9a335-db4a-417d-9909-256d3a2b7fd0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810619971045010,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-jmdnk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54d9a335-db4a-417d-9909-256d3a2b7fd0,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:30:19.658158991Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9998f3b2-a39c-4b2c-a7c2-f02a
ec08f548,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810619822590983,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-22T18:30:19.509831197Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vc6vz,Uid:8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810618384700516,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:30:18.064699169Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-jg8h6,Uid:031f1940
-ae96-44ae-a69c-ea0bbdce81fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810618295544430,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:30:17.985420889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&PodSandboxMetadata{Name:kube-proxy-4m8cm,Uid:f0673173-2469-4cef-9bef-1bee7504559c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810618053135647,Labels:map[string]string{controller-revision-hash: 79cf874c65,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-22T18:30:17.727597820Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-856422,Uid:5579cb4c8bced1b607425c27b729efcf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713810597338015199,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.206:8444,kubernetes.io/config.hash: 5579cb4c8bced1b607425c27b729efcf,kubernetes.io/config.seen: 2024-04-22T18:29:56.871833275Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{I
d:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-856422,Uid:f90176445cd3959e25174c08c1688c45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810597336395633,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f90176445cd3959e25174c08c1688c45,kubernetes.io/config.seen: 2024-04-22T18:29:56.871835226Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-856422,Uid:ff0e65cc4308339ea8fadc15bcfa2684,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810597323804
037,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ff0e65cc4308339ea8fadc15bcfa2684,kubernetes.io/config.seen: 2024-04-22T18:29:56.871834375Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-856422,Uid:0f3621ba1fcbb888b66b3d2a075e4fa1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713810597316276768,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,tier: control-plane,},Annotat
ions:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.206:2379,kubernetes.io/config.hash: 0f3621ba1fcbb888b66b3d2a075e4fa1,kubernetes.io/config.seen: 2024-04-22T18:29:56.871829109Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-856422,Uid:5579cb4c8bced1b607425c27b729efcf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713810304827257658,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.206:8444,kubernetes.io/config.hash: 5579cb4c8bced1b607425c27b729efcf,kubernetes.io/config.s
een: 2024-04-22T18:25:04.357465690Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=174e8534-6e30-47c2-8d39-a333f4530e8e name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.443127997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dfdb18b-bcb3-43e1-b350-153c8230d09a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.443178142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dfdb18b-bcb3-43e1-b350-153c8230d09a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.443423501Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dfdb18b-bcb3-43e1-b350-153c8230d09a name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.484227851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5a2e62e-bd2a-4163-85d0-90e5a94db19b name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.484321302Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5a2e62e-bd2a-4163-85d0-90e5a94db19b name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.485610058Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cebb9b2c-0e2f-4265-9e11-84bd36f1f684 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.486182257Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811164486154835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cebb9b2c-0e2f-4265-9e11-84bd36f1f684 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.486680448Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3fcc62e-6b89-47ce-89d4-3cadd6b9705c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.486733201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3fcc62e-6b89-47ce-89d4-3cadd6b9705c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.487152551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3fcc62e-6b89-47ce-89d4-3cadd6b9705c name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.525196749Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09817175-cbbc-4b5a-b600-e88ce59ad37e name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.525295063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09817175-cbbc-4b5a-b600-e88ce59ad37e name=/runtime.v1.RuntimeService/Version
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.526537326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9c15984-6e12-4a00-8b81-609b94873074 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.527075992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811164527050627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9c15984-6e12-4a00-8b81-609b94873074 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.527658804Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae3dd1a1-8439-462e-ba29-0d0d35fe4b01 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.527716569Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae3dd1a1-8439-462e-ba29-0d0d35fe4b01 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:39:24 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:39:24.527914369Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae3dd1a1-8439-462e-ba29-0d0d35fe4b01 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ee4eac8d0dfa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2b37946810279       storage-provisioner
	abf55b7ba4ed6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   1c76f6957c523       coredns-7db6d8ff4d-vc6vz
	39ab7d17fd2ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   5fb5d022981c9       coredns-7db6d8ff4d-jg8h6
	e08675236130d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   ef5982a75f623       kube-proxy-4m8cm
	3d96267bdd14c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   f475b95b1aca6       kube-scheduler-default-k8s-diff-port-856422
	2532288e8ed99       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   1b498a2ed492d       etcd-default-k8s-diff-port-856422
	5e4ca3cad7be0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   36f42c5c15adb       kube-controller-manager-default-k8s-diff-port-856422
	2540e6dbfeb70       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   c52878a5f3ab1       kube-apiserver-default-k8s-diff-port-856422
	fdb735d23867d       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   14 minutes ago      Exited              kube-apiserver            1                   9dff4617c7e86       kube-apiserver-default-k8s-diff-port-856422
	
	
	==> coredns [39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-856422
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-856422
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=default-k8s-diff-port-856422
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:30:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-856422
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:39:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:35:29 +0000   Mon, 22 Apr 2024 18:29:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:35:29 +0000   Mon, 22 Apr 2024 18:29:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:35:29 +0000   Mon, 22 Apr 2024 18:29:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:35:29 +0000   Mon, 22 Apr 2024 18:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.206
	  Hostname:    default-k8s-diff-port-856422
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bc25147b44b4422871f3fb405e24b9c
	  System UUID:                3bc25147-b44b-4422-871f-3fb405e24b9c
	  Boot ID:                    af94f6ce-ea73-4043-b56f-415b0dd034ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jg8h6                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7db6d8ff4d-vc6vz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-856422                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-856422             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-856422    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-4m8cm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-856422             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-569cc877fc-jmdnk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m28s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m27s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m27s (x8 over 9m28s)  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m27s (x7 over 9m28s)  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-856422 event: Registered Node default-k8s-diff-port-856422 in Controller
	
	
	==> dmesg <==
	[  +0.052360] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041965] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.716838] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.852105] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.513881] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.948521] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.064468] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073789] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.219845] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.147076] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.312900] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Apr22 18:25] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.063319] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.230304] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.617553] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.558909] kauditd_printk_skb: 79 callbacks suppressed
	[Apr22 18:29] systemd-fstab-generator[3592]: Ignoring "noauto" option for root device
	[  +0.068056] kauditd_printk_skb: 9 callbacks suppressed
	[Apr22 18:30] systemd-fstab-generator[3908]: Ignoring "noauto" option for root device
	[  +0.080586] kauditd_printk_skb: 54 callbacks suppressed
	[ +14.871170] systemd-fstab-generator[4121]: Ignoring "noauto" option for root device
	[  +0.113616] kauditd_printk_skb: 12 callbacks suppressed
	[Apr22 18:31] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462] <==
	{"level":"info","ts":"2024-04-22T18:29:58.03336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 switched to configuration voters=(5415665503965951542)"}
	{"level":"info","ts":"2024-04-22T18:29:58.033471Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d466202ffa4fc203","local-member-id":"4b284f151a3a3636","added-peer-id":"4b284f151a3a3636","added-peer-peer-urls":["https://192.168.61.206:2380"]}
	{"level":"info","ts":"2024-04-22T18:29:58.058737Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T18:29:58.05902Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4b284f151a3a3636","initial-advertise-peer-urls":["https://192.168.61.206:2380"],"listen-peer-urls":["https://192.168.61.206:2380"],"advertise-client-urls":["https://192.168.61.206:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.206:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:29:58.05907Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:29:58.05918Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.206:2380"}
	{"level":"info","ts":"2024-04-22T18:29:58.059233Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.206:2380"}
	{"level":"info","ts":"2024-04-22T18:29:58.272031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:58.27227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:58.272303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 received MsgPreVoteResp from 4b284f151a3a3636 at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:58.272399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 became candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.272428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 received MsgVoteResp from 4b284f151a3a3636 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.272522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 became leader at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.272558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4b284f151a3a3636 elected leader 4b284f151a3a3636 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.277321Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.277981Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:58.284352Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d466202ffa4fc203","local-member-id":"4b284f151a3a3636","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.28445Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.284495Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.284514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:58.277907Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4b284f151a3a3636","local-member-attributes":"{Name:default-k8s-diff-port-856422 ClientURLs:[https://192.168.61.206:2379]}","request-path":"/0/members/4b284f151a3a3636/attributes","cluster-id":"d466202ffa4fc203","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:29:58.301534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.206:2379"}
	{"level":"info","ts":"2024-04-22T18:29:58.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:58.301694Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:58.306043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:39:24 up 14 min,  0 users,  load average: 0.04, 0.16, 0.16
	Linux default-k8s-diff-port-856422 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb] <==
	I0422 18:33:20.537592       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:35:00.438322       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:35:00.438664       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0422 18:35:01.439699       1 handler_proxy.go:93] no RequestInfo found in the context
	W0422 18:35:01.439817       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:35:01.439976       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:35:01.440105       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0422 18:35:01.440058       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:35:01.442071       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:36:01.441189       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:36:01.441399       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:36:01.441411       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:36:01.442281       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:36:01.442439       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:36:01.442486       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:38:01.442554       1 handler_proxy.go:93] no RequestInfo found in the context
	W0422 18:38:01.442595       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:38:01.443062       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:38:01.443075       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0422 18:38:01.443134       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:38:01.444923       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f] <==
	W0422 18:29:51.908687       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:51.960417       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:51.994852       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.014495       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.073167       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.086811       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.178745       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.216519       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.219256       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.250598       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.278868       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.281463       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.350220       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.364651       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.364669       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.379135       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.433200       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.533018       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.570647       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.596429       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.610486       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.679603       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.789789       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:53.316138       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:53.359707       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445] <==
	I0422 18:33:50.181386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="98.338µs"
	E0422 18:34:17.815337       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:34:18.247387       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:34:47.821159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:34:48.256360       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:35:17.826392       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:35:18.264981       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:35:47.832229       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:35:48.273729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:36:17.840268       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:36:18.283321       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:36:21.182905       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="204.671µs"
	I0422 18:36:36.186164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="129.274µs"
	E0422 18:36:47.846195       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:36:48.294403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:37:17.852512       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:37:18.302783       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:37:47.858401       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:37:48.311068       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:38:17.864544       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:38:18.319225       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:38:47.870416       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:38:48.328709       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:39:17.878299       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:39:18.337289       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986] <==
	I0422 18:30:18.849498       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:30:18.875885       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.206"]
	I0422 18:30:18.961627       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:30:18.961689       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:30:18.961710       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:30:18.964822       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:30:18.965141       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:30:18.965165       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:30:18.966787       1 config.go:192] "Starting service config controller"
	I0422 18:30:18.966827       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:30:18.966853       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:30:18.966858       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:30:18.972134       1 config.go:319] "Starting node config controller"
	I0422 18:30:18.972171       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:30:19.067039       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 18:30:19.067101       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:30:19.072713       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac] <==
	W0422 18:30:00.490454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 18:30:00.490491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 18:30:00.490518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:30:00.490544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 18:30:00.491829       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:30:00.491888       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:30:01.417490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:01.417556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:01.457226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 18:30:01.457281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 18:30:01.545649       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:01.545707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:01.552879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 18:30:01.553000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 18:30:01.569179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:30:01.569241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 18:30:01.739732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:01.739900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:01.739976       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:30:01.740058       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:30:01.780896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:30:01.781022       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 18:30:01.787040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 18:30:01.787143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0422 18:30:03.890046       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:37:03 default-k8s-diff-port-856422 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:37:03 default-k8s-diff-port-856422 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:37:03 default-k8s-diff-port-856422 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:37:04 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:37:04.164566    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:37:18 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:37:18.164521    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:37:29 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:37:29.166815    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:37:44 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:37:44.164843    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:37:57 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:37:57.165027    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:38:03 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:38:03.190080    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:38:03 default-k8s-diff-port-856422 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:38:03 default-k8s-diff-port-856422 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:38:03 default-k8s-diff-port-856422 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:38:03 default-k8s-diff-port-856422 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:38:09 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:38:09.164406    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:38:21 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:38:21.163836    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:38:36 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:38:36.166305    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:38:48 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:38:48.164646    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:39:02 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:39:02.163858    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:39:03 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:39:03.191163    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:39:03 default-k8s-diff-port-856422 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:39:03 default-k8s-diff-port-856422 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:39:03 default-k8s-diff-port-856422 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:39:03 default-k8s-diff-port-856422 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:39:13 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:39:13.165504    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:39:25 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:39:25.174325    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	
	
	==> storage-provisioner [7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633] <==
	I0422 18:30:20.047257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:30:20.067226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:30:20.067310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:30:20.089915       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:30:20.090314       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-856422_13ff8122-b447-4862-9058-e11fab20460d!
	I0422 18:30:20.090582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c82121e4-0669-4a12-a537-ff70e2307a04", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-856422_13ff8122-b447-4862-9058-e11fab20460d became leader
	I0422 18:30:20.190579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-856422_13ff8122-b447-4862-9058-e11fab20460d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jmdnk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 describe pod metrics-server-569cc877fc-jmdnk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-856422 describe pod metrics-server-569cc877fc-jmdnk: exit status 1 (64.666342ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jmdnk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-856422 describe pod metrics-server-569cc877fc-jmdnk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0422 18:31:19.002904   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 18:32:45.496389   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:33:09.194466   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:33:10.953121   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 18:33:20.338867   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407991 -n no-preload-407991
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-22 18:40:15.907569723 +0000 UTC m=+6200.637183332
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407991 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-407991 logs -n 25: (2.096499845s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo cat                              | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:21:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:21:44.651239   78377 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:21:44.651502   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651512   78377 out.go:304] Setting ErrFile to fd 2...
	I0422 18:21:44.651517   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651743   78377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:21:44.652361   78377 out.go:298] Setting JSON to false
	I0422 18:21:44.653361   78377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7450,"bootTime":1713802655,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:21:44.653418   78377 start.go:139] virtualization: kvm guest
	I0422 18:21:44.655663   78377 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:21:44.657140   78377 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:21:44.658441   78377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:21:44.657169   78377 notify.go:220] Checking for updates...
	I0422 18:21:44.661128   78377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:21:44.662518   78377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:21:44.663775   78377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:21:44.665418   78377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:21:44.667565   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:21:44.667940   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.667974   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.682806   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0422 18:21:44.683248   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.683772   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.683796   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.684162   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.684386   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.686458   78377 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:21:44.688047   78377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:21:44.688430   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.688471   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.703069   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0422 18:21:44.703543   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.704022   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.704045   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.704344   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.704551   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.740500   78377 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:21:44.741959   78377 start.go:297] selected driver: kvm2
	I0422 18:21:44.741977   78377 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.742115   78377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:21:44.742852   78377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.742936   78377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:21:44.757771   78377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:21:44.758147   78377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:21:44.758223   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:21:44.758237   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:21:44.758283   78377 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.758417   78377 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.760296   78377 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:21:44.761538   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:21:44.761589   78377 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:21:44.761603   78377 cache.go:56] Caching tarball of preloaded images
	I0422 18:21:44.761682   78377 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:21:44.761696   78377 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:21:44.761815   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:21:44.762033   78377 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:21:45.719482   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:48.791433   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:54.871446   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:57.943441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:04.023441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:07.095417   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:13.175430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:16.247522   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:22.327414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:25.399441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:31.479440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:34.551439   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:40.631451   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:43.703447   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:49.783400   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:52.855484   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:58.935464   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:02.007435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:08.087442   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:11.159452   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:17.239435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:20.311430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:26.391420   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:29.463418   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:35.543443   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:38.615421   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:44.695419   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:47.767475   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:53.847471   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:56.919436   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:02.999404   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:06.071458   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:12.151440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:15.223414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:18.227587   77634 start.go:364] duration metric: took 4m29.759611802s to acquireMachinesLock for "embed-certs-782377"
	I0422 18:24:18.227650   77634 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:18.227661   77634 fix.go:54] fixHost starting: 
	I0422 18:24:18.227979   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:18.228013   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:18.243001   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0422 18:24:18.243415   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:18.243835   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:24:18.243850   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:18.244219   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:18.244384   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:18.244534   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:24:18.246202   77634 fix.go:112] recreateIfNeeded on embed-certs-782377: state=Stopped err=<nil>
	I0422 18:24:18.246228   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	W0422 18:24:18.246399   77634 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:18.248257   77634 out.go:177] * Restarting existing kvm2 VM for "embed-certs-782377" ...
	I0422 18:24:18.249777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Start
	I0422 18:24:18.249966   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring networks are active...
	I0422 18:24:18.250666   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network default is active
	I0422 18:24:18.251036   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network mk-embed-certs-782377 is active
	I0422 18:24:18.251499   77634 main.go:141] libmachine: (embed-certs-782377) Getting domain xml...
	I0422 18:24:18.252150   77634 main.go:141] libmachine: (embed-certs-782377) Creating domain...
	I0422 18:24:18.225125   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:18.225168   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225565   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:24:18.225593   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225781   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:24:18.227460   77400 machine.go:97] duration metric: took 4m37.410379606s to provisionDockerMachine
	I0422 18:24:18.227495   77400 fix.go:56] duration metric: took 4m37.433636251s for fixHost
	I0422 18:24:18.227499   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 4m37.433656207s
	W0422 18:24:18.227517   77400 start.go:713] error starting host: provision: host is not running
	W0422 18:24:18.227584   77400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0422 18:24:18.227593   77400 start.go:728] Will try again in 5 seconds ...
	I0422 18:24:19.442937   77634 main.go:141] libmachine: (embed-certs-782377) Waiting to get IP...
	I0422 18:24:19.444048   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.444425   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.444484   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.444392   78906 retry.go:31] will retry after 283.008432ms: waiting for machine to come up
	I0422 18:24:19.729076   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.729457   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.729493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.729411   78906 retry.go:31] will retry after 252.047573ms: waiting for machine to come up
	I0422 18:24:19.983011   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.983417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.983442   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.983397   78906 retry.go:31] will retry after 300.528755ms: waiting for machine to come up
	I0422 18:24:20.286039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.286467   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.286500   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.286425   78906 retry.go:31] will retry after 426.555496ms: waiting for machine to come up
	I0422 18:24:20.715191   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.715601   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.715638   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.715525   78906 retry.go:31] will retry after 533.433633ms: waiting for machine to come up
	I0422 18:24:21.250151   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:21.250702   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:21.250732   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:21.250646   78906 retry.go:31] will retry after 854.033547ms: waiting for machine to come up
	I0422 18:24:22.106728   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.107083   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.107109   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.107036   78906 retry.go:31] will retry after 761.233698ms: waiting for machine to come up
	I0422 18:24:22.870007   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.870408   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.870435   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.870364   78906 retry.go:31] will retry after 1.121568589s: waiting for machine to come up
	I0422 18:24:23.229316   77400 start.go:360] acquireMachinesLock for no-preload-407991: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:24:23.993127   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:23.993600   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:23.993623   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:23.993535   78906 retry.go:31] will retry after 1.525222377s: waiting for machine to come up
	I0422 18:24:25.520203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:25.520584   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:25.520609   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:25.520557   78906 retry.go:31] will retry after 1.618927059s: waiting for machine to come up
	I0422 18:24:27.140862   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:27.141363   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:27.141391   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:27.141315   78906 retry.go:31] will retry after 1.828869827s: waiting for machine to come up
	I0422 18:24:28.972053   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:28.972472   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:28.972508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:28.972438   78906 retry.go:31] will retry after 2.456935091s: waiting for machine to come up
	I0422 18:24:31.430825   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:31.431208   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:31.431266   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:31.431181   78906 retry.go:31] will retry after 3.415431602s: waiting for machine to come up
	I0422 18:24:36.144008   77929 start.go:364] duration metric: took 4m11.537292071s to acquireMachinesLock for "default-k8s-diff-port-856422"
	I0422 18:24:36.144073   77929 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:36.144079   77929 fix.go:54] fixHost starting: 
	I0422 18:24:36.144413   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:36.144450   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:36.161253   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0422 18:24:36.161715   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:36.162147   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:24:36.162166   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:36.162536   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:36.162743   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:36.162914   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:24:36.164366   77929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-856422: state=Stopped err=<nil>
	I0422 18:24:36.164397   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	W0422 18:24:36.164563   77929 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:36.166915   77929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-856422" ...
	I0422 18:24:34.847819   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848316   77634 main.go:141] libmachine: (embed-certs-782377) Found IP for machine: 192.168.50.114
	I0422 18:24:34.848339   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has current primary IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848357   77634 main.go:141] libmachine: (embed-certs-782377) Reserving static IP address...
	I0422 18:24:34.848741   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.848769   77634 main.go:141] libmachine: (embed-certs-782377) DBG | skip adding static IP to network mk-embed-certs-782377 - found existing host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"}
	I0422 18:24:34.848782   77634 main.go:141] libmachine: (embed-certs-782377) Reserved static IP address: 192.168.50.114
	I0422 18:24:34.848801   77634 main.go:141] libmachine: (embed-certs-782377) Waiting for SSH to be available...
	I0422 18:24:34.848808   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Getting to WaitForSSH function...
	I0422 18:24:34.850829   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851167   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.851199   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851332   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH client type: external
	I0422 18:24:34.851352   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa (-rw-------)
	I0422 18:24:34.851383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:34.851402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | About to run SSH command:
	I0422 18:24:34.851417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | exit 0
	I0422 18:24:34.975383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:34.975812   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetConfigRaw
	I0422 18:24:34.976602   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:34.979578   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.979959   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.979992   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.980238   77634 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/config.json ...
	I0422 18:24:34.980472   77634 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:34.980497   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:34.980777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:34.983493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.983958   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.983999   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.984175   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:34.984372   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984710   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:34.984894   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:34.985074   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:34.985086   77634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:35.099838   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:35.099873   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100144   77634 buildroot.go:166] provisioning hostname "embed-certs-782377"
	I0422 18:24:35.100169   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100381   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.103203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103589   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.103618   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103754   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.103930   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104116   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104262   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.104446   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.104696   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.104720   77634 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-782377 && echo "embed-certs-782377" | sudo tee /etc/hostname
	I0422 18:24:35.223934   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-782377
	
	I0422 18:24:35.223962   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.227033   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227376   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.227413   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.227779   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.227976   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.228140   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.228334   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.228492   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.228508   77634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-782377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-782377/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-782377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:35.346513   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:35.346545   77634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:35.346561   77634 buildroot.go:174] setting up certificates
	I0422 18:24:35.346571   77634 provision.go:84] configureAuth start
	I0422 18:24:35.346598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.346898   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:35.349820   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350164   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.350192   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350301   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.352921   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353288   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.353314   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353488   77634 provision.go:143] copyHostCerts
	I0422 18:24:35.353543   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:35.353552   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:35.353619   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:35.353717   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:35.353725   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:35.353749   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:35.353801   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:35.353810   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:35.353831   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:35.353894   77634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.embed-certs-782377 san=[127.0.0.1 192.168.50.114 embed-certs-782377 localhost minikube]
	I0422 18:24:35.463676   77634 provision.go:177] copyRemoteCerts
	I0422 18:24:35.463733   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:35.463758   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.466567   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.467039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.467415   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.467605   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.467740   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.549947   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:35.576364   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:24:35.601539   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:35.625959   77634 provision.go:87] duration metric: took 279.37435ms to configureAuth
	I0422 18:24:35.625992   77634 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:35.626171   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:35.626235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.629095   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.629533   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629707   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.629934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630077   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630238   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.630365   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.630546   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.630563   77634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:35.906862   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:35.906892   77634 machine.go:97] duration metric: took 926.403466ms to provisionDockerMachine
	I0422 18:24:35.906905   77634 start.go:293] postStartSetup for "embed-certs-782377" (driver="kvm2")
	I0422 18:24:35.906916   77634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:35.906934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:35.907241   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:35.907277   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.910029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.910438   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910599   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.910782   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.910993   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.911168   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.994189   77634 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:35.998376   77634 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:35.998395   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:35.998468   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:35.998545   77634 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:35.998650   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:36.008268   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:36.034031   77634 start.go:296] duration metric: took 127.110389ms for postStartSetup
	I0422 18:24:36.034081   77634 fix.go:56] duration metric: took 17.806421597s for fixHost
	I0422 18:24:36.034100   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.036964   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037357   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.037380   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.037775   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038051   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.038403   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:36.038568   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:36.038579   77634 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:24:36.143878   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810276.108619822
	
	I0422 18:24:36.143903   77634 fix.go:216] guest clock: 1713810276.108619822
	I0422 18:24:36.143911   77634 fix.go:229] Guest: 2024-04-22 18:24:36.108619822 +0000 UTC Remote: 2024-04-22 18:24:36.034084746 +0000 UTC m=+287.715620683 (delta=74.535076ms)
	I0422 18:24:36.143936   77634 fix.go:200] guest clock delta is within tolerance: 74.535076ms
	I0422 18:24:36.143941   77634 start.go:83] releasing machines lock for "embed-certs-782377", held for 17.916313877s
	I0422 18:24:36.143966   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.144235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:36.146867   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147228   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.147257   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147431   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.147883   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148066   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148171   77634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:36.148218   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.148377   77634 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:36.148403   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.150838   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151150   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151176   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151268   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151296   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.151466   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.151628   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.151671   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151695   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151747   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.151880   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.152055   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.152209   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.152350   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.229109   77634 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:36.266621   77634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:36.421344   77634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:36.427814   77634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:36.427892   77634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:36.448157   77634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:36.448192   77634 start.go:494] detecting cgroup driver to use...
	I0422 18:24:36.448255   77634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:36.468930   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:36.485780   77634 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:36.485856   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:36.502182   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:36.521179   77634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:36.636244   77634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:36.783292   77634 docker.go:233] disabling docker service ...
	I0422 18:24:36.783366   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:36.803014   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:36.817938   77634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:36.957954   77634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:37.085750   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:37.101054   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:37.123504   77634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:37.123555   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.134422   77634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:37.134491   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.145961   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.157192   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.170117   77634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:37.188656   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.205792   77634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.225739   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.236719   77634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:37.246351   77634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:37.246401   77634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:37.261144   77634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:37.271464   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:37.395686   77634 ssh_runner.go:195] Run: sudo systemctl restart crio
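The block above shows minikube disabling containerd/Docker and rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, default_sysctls) before restarting CRI-O. A minimal Go sketch of that sequence is below; runOnGuest is a hypothetical stand-in for minikube's ssh_runner, and the host address is taken from this log, so the snippet is illustrative rather than the actual implementation.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // runOnGuest runs a single shell command on the guest VM over SSH and
    // surfaces any output when the command fails.
    func runOnGuest(host, cmd string) error {
    	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q: %v\n%s", cmd, err, out)
    	}
    	return nil
    }

    func main() {
    	host := "root@192.168.50.114" // guest address taken from the log above
    	steps := []string{
    		// point CRI-O at the pause image kubeadm expects
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		// use cgroupfs so CRI-O matches the kubelet's cgroupDriver
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
    		// reload units and restart the runtime to pick up the edits
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    	for _, s := range steps {
    		if err := runOnGuest(host, s); err != nil {
    			log.Fatal(err)
    		}
    	}
    }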
	I0422 18:24:37.534079   77634 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:37.534156   77634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:37.539212   77634 start.go:562] Will wait 60s for crictl version
	I0422 18:24:37.539285   77634 ssh_runner.go:195] Run: which crictl
	I0422 18:24:37.543239   77634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:37.581460   77634 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:37.581562   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.611743   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.645811   77634 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:37.647247   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:37.650321   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.650811   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:37.650841   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.651055   77634 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:37.655865   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
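The grep/echo/cp one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway. A pure-Go equivalent of that idempotent update is sketched below; the path and entry come from the log, while doing it locally instead of over SSH is an assumption made for brevity.

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // upsertHostsEntry drops any existing line for the given hostname from the
    // hosts file and appends a fresh "ip<TAB>hostname" entry, much like the
    // grep -v / echo pipeline in the log above.
    func upsertHostsEntry(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // drop the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+hostname)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    }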
	I0422 18:24:37.673617   77634 kubeadm.go:877] updating cluster {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:37.673732   77634 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:37.673785   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:37.718534   77634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:37.718609   77634 ssh_runner.go:195] Run: which lz4
	I0422 18:24:37.723369   77634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:37.728270   77634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:37.728303   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:36.168344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Start
	I0422 18:24:36.168494   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring networks are active...
	I0422 18:24:36.169419   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network default is active
	I0422 18:24:36.169811   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network mk-default-k8s-diff-port-856422 is active
	I0422 18:24:36.170341   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Getting domain xml...
	I0422 18:24:36.171019   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Creating domain...
	I0422 18:24:37.407148   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting to get IP...
	I0422 18:24:37.408083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408430   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408509   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.408416   79040 retry.go:31] will retry after 267.855158ms: waiting for machine to come up
	I0422 18:24:37.677765   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678134   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678168   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.678084   79040 retry.go:31] will retry after 267.61504ms: waiting for machine to come up
	I0422 18:24:37.947737   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948250   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.948216   79040 retry.go:31] will retry after 351.088664ms: waiting for machine to come up
	I0422 18:24:38.300548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301057   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301090   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.301011   79040 retry.go:31] will retry after 560.164848ms: waiting for machine to come up
	I0422 18:24:38.862557   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863114   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.863075   79040 retry.go:31] will retry after 590.286684ms: waiting for machine to come up
	I0422 18:24:39.454925   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455483   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455510   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:39.455428   79040 retry.go:31] will retry after 870.474888ms: waiting for machine to come up
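Interleaved with the embed-certs bring-up, the default-k8s-diff-port-856422 machine is still waiting for libvirt to hand its domain a DHCP lease, retrying with growing delays. A minimal sketch of such a wait loop is below; lookupLeaseIP is a hypothetical callback standing in for the libmachine lease lookup, and minikube's retry.go additionally randomizes the interval.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls lookupLeaseIP until the domain reports an address or the
    // timeout elapses, doubling the delay between attempts.
    func waitForIP(lookupLeaseIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupLeaseIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(delay)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for a DHCP lease")
    }

    func main() {
    	// Fake lookup that "finds" the IP on the third attempt, for illustration.
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 3 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.61.206", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }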
	I0422 18:24:39.338447   77634 crio.go:462] duration metric: took 1.615205556s to copy over tarball
	I0422 18:24:39.338545   77634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:41.640474   77634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301883484s)
	I0422 18:24:41.640514   77634 crio.go:469] duration metric: took 2.302038123s to extract the tarball
	I0422 18:24:41.640524   77634 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:24:41.680325   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:41.724755   77634 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:24:41.724777   77634 cache_images.go:84] Images are preloaded, skipping loading
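Before and after the tarball extraction, minikube runs `sudo crictl images --output json` and checks whether the expected control-plane images are present; only when they are missing does it ship and unpack the ~394 MB preload. The sketch below shows one way to make that decision from crictl's JSON, assuming the output carries an `images` array with `repoTags` fields (the exact field names are an assumption, not verified against this crictl version).

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether the runtime already knows about the given tag,
    // based on `crictl images --output json` run locally (over SSH in minikube).
    func hasImage(tag string) (bool, error) {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		return false, err
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		return false, err
    	}
    	for _, img := range imgs.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true, nil
    			}
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.0")
    	fmt.Println("preloaded:", ok, "err:", err)
    	// If !ok, the caller would scp the preload tarball and extract it with:
    	//   sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    }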
	I0422 18:24:41.724785   77634 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.30.0 crio true true} ...
	I0422 18:24:41.724887   77634 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-782377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:24:41.724964   77634 ssh_runner.go:195] Run: crio config
	I0422 18:24:41.772680   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:41.772704   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:41.772715   77634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:24:41.772733   77634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-782377 NodeName:embed-certs-782377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:24:41.772898   77634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-782377"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:24:41.772964   77634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:24:41.783492   77634 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:24:41.783575   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:24:41.793500   77634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0422 18:24:41.810415   77634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:24:41.827504   77634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
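The InitConfiguration/ClusterConfiguration/KubeletConfiguration document above is generated from the cluster config and then shipped to /var/tmp/minikube/kubeadm.yaml.new on the guest. A small text/template sketch of how the per-node values (name, IP, Kubernetes version, port) could be substituted is shown below; the template is a trimmed illustration, not minikube's actual bootstrapper template.

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed kubeadm template: just the fields that vary per node in the
    // config dumped above. The real template carries far more settings.
    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
    `

    func main() {
    	data := struct {
    		NodeName, NodeIP, KubernetesVersion string
    		Port                                int
    	}{"embed-certs-782377", "192.168.50.114", "v1.30.0", 8443}

    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Render to stdout; minikube instead copies the rendered bytes to
    	// /var/tmp/minikube/kubeadm.yaml.new on the guest.
    	if err := t.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }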
	I0422 18:24:41.845704   77634 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0422 18:24:41.849728   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:41.862798   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:41.998260   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:24:42.018779   77634 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377 for IP: 192.168.50.114
	I0422 18:24:42.018801   77634 certs.go:194] generating shared ca certs ...
	I0422 18:24:42.018820   77634 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:24:42.018977   77634 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:24:42.019034   77634 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:24:42.019048   77634 certs.go:256] generating profile certs ...
	I0422 18:24:42.019146   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/client.key
	I0422 18:24:42.019218   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key.d804c20e
	I0422 18:24:42.019298   77634 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key
	I0422 18:24:42.019455   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:24:42.019493   77634 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:24:42.019509   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:24:42.019539   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:24:42.019571   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:24:42.019606   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:24:42.019665   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:42.020460   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:24:42.065297   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:24:42.098581   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:24:42.139751   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:24:42.169770   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0422 18:24:42.199958   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:24:42.229298   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:24:42.254517   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:24:42.279390   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:24:42.303872   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:24:42.329704   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:24:42.355108   77634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:24:42.372684   77634 ssh_runner.go:195] Run: openssl version
	I0422 18:24:42.378631   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:24:42.389709   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394492   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394552   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.400346   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:24:42.411335   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:24:42.422568   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427213   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427278   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.433277   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:24:42.444618   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:24:42.455793   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460681   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460739   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.466785   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
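The ca-certificates wiring above follows the classic OpenSSL layout: copy each PEM into /usr/share/ca-certificates, then link it into /etc/ssl/certs under its subject-hash name (b5213941.0 for minikubeCA, 51391683.0 and 3ec20f2e.0 for the user certs). A local sketch of that step is below, assuming the openssl binary is available; minikube itself runs the equivalent shell over SSH rather than doing it in-process.

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash installs certPath into /etc/ssl/certs under the
    // "<subject-hash>.0" name that OpenSSL-based verifiers look up.
    func linkBySubjectHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace a stale link if present
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }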
	I0422 18:24:42.485401   77634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:24:42.491205   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:24:42.498635   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:24:42.510577   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:24:42.517596   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:24:42.524413   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:24:42.530872   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
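Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a cert that fails the check would be regenerated before kubeadm runs. The same check in pure Go, using crypto/x509, is sketched below.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within the given window, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println("expires within 24h:", soon, "err:", err)
    }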
	I0422 18:24:42.537199   77634 kubeadm.go:391] StartCluster: {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:24:42.537319   77634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:24:42.537379   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.579863   77634 cri.go:89] found id: ""
	I0422 18:24:42.579944   77634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:24:42.590756   77634 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:24:42.590781   77634 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:24:42.590788   77634 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:24:42.590844   77634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:24:42.601517   77634 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:24:42.603120   77634 kubeconfig.go:125] found "embed-certs-782377" server: "https://192.168.50.114:8443"
	I0422 18:24:42.606189   77634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:24:42.616881   77634 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0422 18:24:42.616911   77634 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:24:42.616922   77634 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:24:42.616970   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.656829   77634 cri.go:89] found id: ""
	I0422 18:24:42.656923   77634 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:24:42.675575   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:24:42.686408   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:24:42.686431   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:24:42.686484   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:24:42.697303   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:24:42.697391   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:24:42.707693   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:24:42.717836   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:24:42.717932   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:24:42.729952   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.740902   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:24:42.740980   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.751946   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:24:42.761758   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:24:42.761830   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:24:42.772699   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
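The block above is the stale-config sweep: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, grep for the expected control-plane endpoint and remove the file if the endpoint is absent (here the files simply do not exist yet, so every grep exits 2 and every rm is a no-op). A local sketch of that logic:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    // removeIfStale deletes a kubeconfig that does not reference the expected
    // control-plane endpoint; a missing file is treated as already clean.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if os.IsNotExist(err) {
    		return nil
    	}
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // still points at the right control plane
    	}
    	return os.Remove(path)
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, endpoint); err != nil {
    			log.Fatal(err)
    		}
    	}
    }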
	I0422 18:24:42.783018   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:42.891737   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:40.327325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327782   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:40.327726   79040 retry.go:31] will retry after 926.321969ms: waiting for machine to come up
	I0422 18:24:41.255601   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256117   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:41.256072   79040 retry.go:31] will retry after 928.33371ms: waiting for machine to come up
	I0422 18:24:42.186290   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186798   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186826   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:42.186762   79040 retry.go:31] will retry after 1.708117553s: waiting for machine to come up
	I0422 18:24:43.896236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:43.896597   79040 retry.go:31] will retry after 1.720003793s: waiting for machine to come up
	I0422 18:24:44.055395   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.163622709s)
	I0422 18:24:44.055429   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.278840   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.351743   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.460115   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:24:44.460202   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:44.960631   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.460588   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.478048   77634 api_server.go:72] duration metric: took 1.017932232s to wait for apiserver process to appear ...
	I0422 18:24:45.478082   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:24:45.478104   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:45.478702   77634 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0422 18:24:45.978527   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.247298   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:24:48.247334   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:24:48.247351   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.295953   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.296005   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.478899   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.488884   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.488920   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.978472   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.992521   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.992552   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:49.479179   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:49.485588   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:24:49.493015   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:24:49.493055   77634 api_server.go:131] duration metric: took 4.01496465s to wait for apiserver health ...
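The healthz exchange above is the usual restart sequence: connection refused while the apiserver starts, a 403 for the anonymous probe, 500s while post-start hooks finish, then 200 "ok". A minimal poller in Go is sketched below; like the probe in the log it authenticates as nobody, so it skips TLS verification and only looks at the status code and body.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // waitForHealthz polls the apiserver's /healthz until it returns 200 "ok"
    // or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(waitForHealthz("https://192.168.50.114:8443/healthz", time.Minute))
    }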
	I0422 18:24:49.493065   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:49.493074   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:49.494997   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:24:45.618240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618714   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618744   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:45.618673   79040 retry.go:31] will retry after 2.396679945s: waiting for machine to come up
	I0422 18:24:48.016812   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017231   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017258   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:48.017197   79040 retry.go:31] will retry after 2.304959564s: waiting for machine to come up
	I0422 18:24:49.496476   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:24:49.516525   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:24:49.541103   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:24:49.552224   77634 system_pods.go:59] 8 kube-system pods found
	I0422 18:24:49.552263   77634 system_pods.go:61] "coredns-7db6d8ff4d-lxcv2" [137ad3db-8bc5-4b7f-8eb0-12a278eba41c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:24:49.552273   77634 system_pods.go:61] "etcd-embed-certs-782377" [85322e31-1ad6-4239-8086-f2a465a28d8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:24:49.552287   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [e791d7d4-a94d-4cce-a50d-4e569350f210] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:24:49.552307   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [cbcc2e7f-7b3a-435b-97d5-5b69b7e399c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:24:49.552317   77634 system_pods.go:61] "kube-proxy-r4249" [7ffb3b8f-53d8-45df-8426-74f0ffb0d20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 18:24:49.552327   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [9568040b-3eca-403e-b078-d6f2071e70c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:24:49.552335   77634 system_pods.go:61] "metrics-server-569cc877fc-d8s5p" [3bcda1df-02f7-4405-95c7-4d8559a0138c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:24:49.552342   77634 system_pods.go:61] "storage-provisioner" [c196d779-346a-4e3f-b1c3-dde4292df017] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:24:49.552351   77634 system_pods.go:74] duration metric: took 11.221599ms to wait for pod list to return data ...
	I0422 18:24:49.552373   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:24:49.556086   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:24:49.556130   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:24:49.556142   77634 node_conditions.go:105] duration metric: took 3.764067ms to run NodePressure ...
	I0422 18:24:49.556161   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:49.852023   77634 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856866   77634 kubeadm.go:733] kubelet initialised
	I0422 18:24:49.856894   77634 kubeadm.go:734] duration metric: took 4.83996ms waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856904   77634 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:24:49.863808   77634 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.868817   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868840   77634 pod_ready.go:81] duration metric: took 5.001181ms for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.868849   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868855   77634 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.873591   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873612   77634 pod_ready.go:81] duration metric: took 4.750292ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.873621   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873627   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.878471   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878494   77634 pod_ready.go:81] duration metric: took 4.859998ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.878503   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878510   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.945869   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945909   77634 pod_ready.go:81] duration metric: took 67.385628ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.945923   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945932   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345633   77634 pod_ready.go:92] pod "kube-proxy-r4249" in "kube-system" namespace has status "Ready":"True"
	I0422 18:24:50.345655   77634 pod_ready.go:81] duration metric: took 399.713725ms for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345666   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:52.352988   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:50.324396   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324920   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324953   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:50.324894   79040 retry.go:31] will retry after 4.018790507s: waiting for machine to come up
	I0422 18:24:54.347584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348046   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Found IP for machine: 192.168.61.206
	I0422 18:24:54.348081   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has current primary IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348094   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserving static IP address...
	I0422 18:24:54.348535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserved static IP address: 192.168.61.206
	I0422 18:24:54.348560   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for SSH to be available...
	I0422 18:24:54.348584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.348624   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | skip adding static IP to network mk-default-k8s-diff-port-856422 - found existing host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"}
	I0422 18:24:54.348640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Getting to WaitForSSH function...
	I0422 18:24:54.351069   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351570   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.351608   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH client type: external
	I0422 18:24:54.351758   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa (-rw-------)
	I0422 18:24:54.351793   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:54.351810   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | About to run SSH command:
	I0422 18:24:54.351834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | exit 0
	I0422 18:24:54.479277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:54.479674   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetConfigRaw
	I0422 18:24:54.480350   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.483089   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.483498   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483801   77929 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/config.json ...
	I0422 18:24:54.484031   77929 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:54.484051   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:54.484272   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.486449   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.486857   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486992   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.487178   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487470   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.487635   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.487825   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.487838   77929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:55.812288   78377 start.go:364] duration metric: took 3m11.050220887s to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:24:55.812348   78377 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:55.812359   78377 fix.go:54] fixHost starting: 
	I0422 18:24:55.812769   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:55.812806   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:55.830114   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0422 18:24:55.830528   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:55.831130   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:24:55.831155   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:55.831459   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:55.831688   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:24:55.831855   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:24:55.833322   78377 fix.go:112] recreateIfNeeded on old-k8s-version-367072: state=Stopped err=<nil>
	I0422 18:24:55.833351   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	W0422 18:24:55.833481   78377 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:55.835517   78377 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	I0422 18:24:54.603732   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:54.603759   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.603993   77929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-856422"
	I0422 18:24:54.604017   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.604280   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.606938   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607302   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.607331   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607524   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.607693   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.607856   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.608002   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.608174   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.608381   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.608398   77929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-856422 && echo "default-k8s-diff-port-856422" | sudo tee /etc/hostname
	I0422 18:24:54.734622   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-856422
	
	I0422 18:24:54.734646   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.737804   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738109   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.738141   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.738495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738773   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.738950   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.739157   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.739176   77929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-856422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-856422/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-856422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:54.864646   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:54.864679   77929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:54.864732   77929 buildroot.go:174] setting up certificates
	I0422 18:24:54.864745   77929 provision.go:84] configureAuth start
	I0422 18:24:54.864764   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.865059   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.868205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868626   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.868666   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868868   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.871736   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872118   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.872147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872275   77929 provision.go:143] copyHostCerts
	I0422 18:24:54.872340   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:54.872353   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:54.872424   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:54.872545   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:54.872557   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:54.872598   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:54.872676   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:54.872688   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:54.872718   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:54.872794   77929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-856422 san=[127.0.0.1 192.168.61.206 default-k8s-diff-port-856422 localhost minikube]
	I0422 18:24:55.091765   77929 provision.go:177] copyRemoteCerts
	I0422 18:24:55.091820   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:55.091848   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.094572   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.094939   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.094970   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.095209   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.095501   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.095767   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.095958   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.192243   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:55.223313   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0422 18:24:55.250149   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:55.279442   77929 provision.go:87] duration metric: took 414.679508ms to configureAuth
	I0422 18:24:55.279474   77929 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:55.280056   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:55.280125   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.282806   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.283237   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283405   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.283636   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283803   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283941   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.284109   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.284276   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.284294   77929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:55.565199   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:55.565225   77929 machine.go:97] duration metric: took 1.081180365s to provisionDockerMachine
	I0422 18:24:55.565239   77929 start.go:293] postStartSetup for "default-k8s-diff-port-856422" (driver="kvm2")
	I0422 18:24:55.565282   77929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:55.565312   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.565649   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:55.565682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.568211   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.568614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568809   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.568994   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.569182   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.569352   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.654461   77929 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:55.658992   77929 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:55.659016   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:55.659091   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:55.659199   77929 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:55.659309   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:55.669183   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:55.694953   77929 start.go:296] duration metric: took 129.698973ms for postStartSetup
	I0422 18:24:55.694998   77929 fix.go:56] duration metric: took 19.550918724s for fixHost
	I0422 18:24:55.695021   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.697596   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.697926   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.697958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.698133   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.698325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698479   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698579   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.698680   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.698897   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.698914   77929 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:24:55.812106   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810295.778892948
	
	I0422 18:24:55.812132   77929 fix.go:216] guest clock: 1713810295.778892948
	I0422 18:24:55.812143   77929 fix.go:229] Guest: 2024-04-22 18:24:55.778892948 +0000 UTC Remote: 2024-04-22 18:24:55.69500303 +0000 UTC m=+271.245786903 (delta=83.889918ms)
	I0422 18:24:55.812168   77929 fix.go:200] guest clock delta is within tolerance: 83.889918ms
	I0422 18:24:55.812176   77929 start.go:83] releasing machines lock for "default-k8s-diff-port-856422", held for 19.668119564s
	I0422 18:24:55.812213   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.812500   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:55.815404   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.815786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.815828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.816036   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816526   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816698   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816781   77929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:55.816823   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.817092   77929 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:55.817116   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.819495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819710   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819931   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.819958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820045   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.820181   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820217   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820362   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820366   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820631   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.820716   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820845   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.904810   77929 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:55.937093   77929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:56.089389   77929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:56.096144   77929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:56.096208   77929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:56.118194   77929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:56.118224   77929 start.go:494] detecting cgroup driver to use...
	I0422 18:24:56.118292   77929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:56.134918   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:56.154107   77929 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:56.154180   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:56.168971   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:56.188793   77929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:56.310223   77929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:56.492316   77929 docker.go:233] disabling docker service ...
	I0422 18:24:56.492430   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:56.515169   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:56.529734   77929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:56.670628   77929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:56.810823   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:56.826785   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:56.847682   77929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:56.847741   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.860499   77929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:56.860576   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.872086   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.883347   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.901596   77929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:56.916912   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.928121   77929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.947335   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.958431   77929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:56.968077   77929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:56.968131   77929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:56.982135   77929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:56.991801   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:57.125635   77929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:57.263889   77929 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:57.263973   77929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:57.269573   77929 start.go:562] Will wait 60s for crictl version
	I0422 18:24:57.269627   77929 ssh_runner.go:195] Run: which crictl
	I0422 18:24:57.273613   77929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:57.314357   77929 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:57.314463   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.345062   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.380868   77929 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:54.353338   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:56.853757   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:57.382284   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:57.385215   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:57.385655   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385889   77929 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:57.390482   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:57.405644   77929 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:57.405766   77929 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:57.405868   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:57.452528   77929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:57.452604   77929 ssh_runner.go:195] Run: which lz4
	I0422 18:24:57.456903   77929 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:24:57.461373   77929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:57.461411   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:59.060426   77929 crio.go:462] duration metric: took 1.603560712s to copy over tarball
	I0422 18:24:59.060532   77929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:55.836947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .Start
	I0422 18:24:55.837156   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:24:55.837991   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:24:55.838340   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:24:55.838802   78377 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:24:55.839484   78377 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:24:57.114447   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:24:57.115418   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.115808   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.115885   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.115780   79197 retry.go:31] will retry after 292.692957ms: waiting for machine to come up
	I0422 18:24:57.410220   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.410760   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.410793   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.410707   79197 retry.go:31] will retry after 381.746596ms: waiting for machine to come up
	I0422 18:24:57.794121   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.794537   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.794561   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.794500   79197 retry.go:31] will retry after 343.501318ms: waiting for machine to come up
	I0422 18:24:58.140203   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.140843   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.140872   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.140795   79197 retry.go:31] will retry after 497.222481ms: waiting for machine to come up
	I0422 18:24:58.639611   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.640103   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.640133   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.640061   79197 retry.go:31] will retry after 578.746837ms: waiting for machine to come up
	I0422 18:24:59.220771   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.221312   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.221342   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.221264   79197 retry.go:31] will retry after 773.821721ms: waiting for machine to come up
	I0422 18:24:58.854112   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:00.856147   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:01.563849   77929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503290941s)
	I0422 18:25:01.563881   77929 crio.go:469] duration metric: took 2.503413712s to extract the tarball
	I0422 18:25:01.563891   77929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:01.603330   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:01.649885   77929 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:25:01.649909   77929 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:25:01.649916   77929 kubeadm.go:928] updating node { 192.168.61.206 8444 v1.30.0 crio true true} ...
	I0422 18:25:01.650053   77929 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-856422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:01.650143   77929 ssh_runner.go:195] Run: crio config
	I0422 18:25:01.698892   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:01.698915   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:01.698929   77929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:01.698948   77929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.206 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-856422 NodeName:default-k8s-diff-port-856422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:01.699075   77929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.206
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-856422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:01.699150   77929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:01.709830   77929 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:01.709903   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:01.720447   77929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0422 18:25:01.738745   77929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:01.756420   77929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0422 18:25:01.775364   77929 ssh_runner.go:195] Run: grep 192.168.61.206	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:01.779476   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:01.792860   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:01.920607   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:01.939637   77929 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422 for IP: 192.168.61.206
	I0422 18:25:01.939658   77929 certs.go:194] generating shared ca certs ...
	I0422 18:25:01.939675   77929 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:01.939858   77929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:01.939911   77929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:01.939922   77929 certs.go:256] generating profile certs ...
	I0422 18:25:01.940026   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/client.key
	I0422 18:25:01.940105   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key.e8400874
	I0422 18:25:01.940170   77929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key
	I0422 18:25:01.940320   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:01.940386   77929 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:01.940400   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:01.940437   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:01.940474   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:01.940506   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:01.940603   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:01.941408   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:01.981392   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:02.020335   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:02.057221   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:02.088571   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 18:25:02.123716   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:02.153926   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:02.183499   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:02.212438   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:02.238650   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:02.265786   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:02.295001   77929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:02.315343   77929 ssh_runner.go:195] Run: openssl version
	I0422 18:25:02.322001   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:02.334785   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340619   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340686   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.348942   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:02.364960   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:02.381460   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386720   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386794   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.392894   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:02.404951   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:02.417334   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423503   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423573   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.430512   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
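The test/ln pairs above follow the usual OpenSSL c_rehash convention: `openssl x509 -hash -noout -in <cert>` prints the subject hash, and a `<hash>.0` symlink under /etc/ssl/certs makes the certificate discoverable to OpenSSL-based clients. A rough Go equivalent of one iteration (a sketch only; it shells out to openssl and needs the same root privileges as the sudo'd commands in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/18884.pem" // path taken from the log above
	// openssl x509 -hash -noout prints the subject hash, e.g. 51391683.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of "ln -fs": drop a stale link if present, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", certPath)
}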
	I0422 18:25:02.444132   77929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:02.449749   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:02.456667   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:02.463700   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:02.470474   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:02.477324   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:02.483900   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
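Each of the `-checkend 86400` invocations above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certificates can be reused. The same check expressed with crypto/x509 (a sketch; the path is one of the files probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}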
	I0422 18:25:02.490614   77929 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:02.490719   77929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:02.490768   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.538766   77929 cri.go:89] found id: ""
	I0422 18:25:02.538849   77929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:02.549686   77929 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:02.549711   77929 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:02.549717   77929 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:02.549794   77929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:02.560594   77929 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:02.561584   77929 kubeconfig.go:125] found "default-k8s-diff-port-856422" server: "https://192.168.61.206:8444"
	I0422 18:25:02.563656   77929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:02.575462   77929 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.206
	I0422 18:25:02.575507   77929 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:02.575522   77929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:02.575606   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.628012   77929 cri.go:89] found id: ""
	I0422 18:25:02.628080   77929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:02.645405   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:02.656723   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:02.656751   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:02.656814   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:25:02.667202   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:02.667269   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:02.678303   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:25:02.688600   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:02.688690   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:02.699963   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.710329   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:02.710393   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.721188   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:25:02.731964   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:02.732040   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:02.743541   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:02.755030   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:02.870301   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:03.995375   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125032803s)
	I0422 18:25:03.995447   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.230252   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.302979   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.395038   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:04.395115   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:59.996437   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.996984   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.997018   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.996926   79197 retry.go:31] will retry after 1.191182438s: waiting for machine to come up
	I0422 18:25:01.190382   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:01.190954   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:01.190990   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:01.190917   79197 retry.go:31] will retry after 1.312288818s: waiting for machine to come up
	I0422 18:25:02.504320   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:02.504783   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:02.504807   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:02.504744   79197 retry.go:31] will retry after 1.553447941s: waiting for machine to come up
	I0422 18:25:04.060300   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:04.060822   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:04.060855   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:04.060778   79197 retry.go:31] will retry after 1.790234912s: waiting for machine to come up
	I0422 18:25:03.502023   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.353882   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:04.353905   77634 pod_ready.go:81] duration metric: took 14.00823208s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:04.353915   77634 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:06.363356   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:08.363954   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.896176   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.396048   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.440071   77929 api_server.go:72] duration metric: took 1.045032787s to wait for apiserver process to appear ...
	I0422 18:25:05.440103   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:25:05.440148   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.759542   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.759577   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.759592   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.793255   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.793294   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.940652   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.945611   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:08.945646   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:09.440292   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.464743   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.464770   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:05.852898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:05.853386   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:05.853413   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:05.853350   79197 retry.go:31] will retry after 2.265221688s: waiting for machine to come up
	I0422 18:25:08.121376   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:08.121797   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:08.121835   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:08.121747   79197 retry.go:31] will retry after 3.098868652s: waiting for machine to come up
	I0422 18:25:09.940470   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.946872   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.946900   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:10.441291   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:10.445834   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:25:10.452788   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:25:10.452814   77929 api_server.go:131] duration metric: took 5.012704724s to wait for apiserver health ...
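The 403 responses above come from the anonymous /healthz probe before RBAC bootstrap completes, and the 500s report the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending; the wait loop simply retries until a plain 200. A simplified polling loop in the same spirit (not minikube's api_server.go; the TLS handling here is an assumption made only to keep the sketch short):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver's certificate is not loaded in this sketch, so verification is
	// skipped here; minikube itself talks to the endpoint with the cluster CA.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.206:8444/healthz" // address taken from the log above
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz status:", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for /healthz")
}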
	I0422 18:25:10.452823   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:10.452828   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:10.454695   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:25:10.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:13.361234   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:10.456234   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:25:10.469460   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:25:10.510297   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:25:10.527988   77929 system_pods.go:59] 8 kube-system pods found
	I0422 18:25:10.528034   77929 system_pods.go:61] "coredns-7db6d8ff4d-w968m" [1372c3d4-cb23-4f33-911b-57876688fcd4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:25:10.528044   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [af6c3f45-494d-469b-95e0-3d0842d07a70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:25:10.528051   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [665925b4-3073-41c2-86c0-12186f079459] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:25:10.528057   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [e8661b67-89c5-43a6-b66e-828f637942e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:25:10.528061   77929 system_pods.go:61] "kube-proxy-4xvx2" [0e662ebe-1f6f-48fe-86c7-595b0bfa4bb6] Running
	I0422 18:25:10.528066   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [e6101593-2ee5-4765-b129-33b3ed7d4c98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:25:10.528075   77929 system_pods.go:61] "metrics-server-569cc877fc-l5qqw" [85eab808-f1f0-4fbc-9c54-1ae307226243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:25:10.528079   77929 system_pods.go:61] "storage-provisioner" [ba8465de-babc-4496-809f-68f6ec917ce8] Running
	I0422 18:25:10.528095   77929 system_pods.go:74] duration metric: took 17.768241ms to wait for pod list to return data ...
	I0422 18:25:10.528104   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:25:10.539169   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:25:10.539202   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:25:10.539214   77929 node_conditions.go:105] duration metric: took 11.105847ms to run NodePressure ...
	I0422 18:25:10.539237   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:10.808687   77929 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:25:10.815993   77929 kubeadm.go:733] kubelet initialised
	I0422 18:25:10.816025   77929 kubeadm.go:734] duration metric: took 7.302574ms waiting for restarted kubelet to initialise ...
	I0422 18:25:10.816037   77929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:25:10.824257   77929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:12.837255   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"False"
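The pod_ready lines interleaved through this log poll each system-critical pod until its Ready condition turns True. A compact client-go sketch of that check (standard client-go packages; the kubeconfig path is illustrative and the pod name is copied from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-w968m", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}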
	I0422 18:25:11.221887   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:11.222319   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:11.222358   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:11.222277   79197 retry.go:31] will retry after 4.068460973s: waiting for machine to come up
	I0422 18:25:16.704684   77400 start.go:364] duration metric: took 53.475319353s to acquireMachinesLock for "no-preload-407991"
	I0422 18:25:16.704741   77400 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:25:16.704752   77400 fix.go:54] fixHost starting: 
	I0422 18:25:16.705132   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:25:16.705166   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:25:16.721711   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0422 18:25:16.722127   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:25:16.722671   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:25:16.722693   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:25:16.723022   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:25:16.723220   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:16.723426   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:25:16.725197   77400 fix.go:112] recreateIfNeeded on no-preload-407991: state=Stopped err=<nil>
	I0422 18:25:16.725231   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	W0422 18:25:16.725430   77400 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:25:16.727275   77400 out.go:177] * Restarting existing kvm2 VM for "no-preload-407991" ...
	I0422 18:25:15.295463   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296039   78377 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:25:15.296072   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296081   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:25:15.296472   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.296493   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
	I0422 18:25:15.296508   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | skip adding static IP to network mk-old-k8s-version-367072 - found existing host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"}
	I0422 18:25:15.296524   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:25:15.296537   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:25:15.299164   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299527   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.299562   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299661   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:25:15.299692   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:25:15.299731   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:15.299745   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:25:15.299762   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:25:15.431323   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:15.431669   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:25:15.432328   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.434829   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435261   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.435293   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435554   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:25:15.435765   78377 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:15.435786   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:15.436017   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.438390   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438750   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.438784   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438910   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.439095   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439314   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.439666   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.439849   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.439861   78377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:15.555657   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:15.555686   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.555931   78377 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:25:15.555962   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.556169   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.558789   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559254   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.559292   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559331   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.559492   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559641   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559748   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.559877   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.560055   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.560077   78377 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:25:15.690454   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:25:15.690486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.693309   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693654   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.693690   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693952   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.694172   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694390   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694546   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.694732   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.694940   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.694960   78377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:15.821039   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
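provisionDockerMachine drives the guest entirely over SSH with the private key shown earlier (machines/old-k8s-version-367072/id_rsa): first `hostname`, then the hostname/`/etc/hosts` script above. A bare-bones equivalent using golang.org/x/crypto/ssh (a sketch; host, user and key path are taken from the log, error handling kept minimal):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.72.149:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}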
	I0422 18:25:15.821068   78377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:15.821096   78377 buildroot.go:174] setting up certificates
	I0422 18:25:15.821105   78377 provision.go:84] configureAuth start
	I0422 18:25:15.821113   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.821339   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.824209   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824673   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.824710   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824884   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.827439   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827725   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.827752   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827907   78377 provision.go:143] copyHostCerts
	I0422 18:25:15.827974   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:15.827987   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:15.828059   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:15.828170   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:15.828181   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:15.828209   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:15.828281   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:15.828291   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:15.828317   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:15.828411   78377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
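The server.pem generated at this step carries exactly the SAN list printed in the log (127.0.0.1, 192.168.72.149, localhost, minikube, old-k8s-version-367072) and is signed by the shared CA. A condensed crypto/x509 sketch of issuing such a certificate (a throwaway in-memory CA stands in for the real ca.pem/ca-key.pem; errors are elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from the minikube home.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP and DNS SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-367072"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.149")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-367072"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}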
	I0422 18:25:15.967003   78377 provision.go:177] copyRemoteCerts
	I0422 18:25:15.967056   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:15.967082   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.969759   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970152   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.970189   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970419   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.970600   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.970750   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.970903   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.058600   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:16.088368   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:25:16.119116   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:16.145380   78377 provision.go:87] duration metric: took 324.262342ms to configureAuth
	I0422 18:25:16.145416   78377 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:16.145651   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:25:16.145736   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.148776   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149221   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.149251   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149449   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.149624   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149789   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.150116   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.150295   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.150313   78377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:16.448112   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:16.448141   78377 machine.go:97] duration metric: took 1.012360153s to provisionDockerMachine
	I0422 18:25:16.448154   78377 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:25:16.448166   78377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:16.448188   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.448508   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:16.448541   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.451479   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.451874   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.451898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.452170   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.452373   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.452576   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.452773   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.543300   78377 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:16.549385   78377 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:16.549409   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:16.549473   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:16.549590   78377 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:16.549727   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:16.560863   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:16.585861   78377 start.go:296] duration metric: took 137.693932ms for postStartSetup
	I0422 18:25:16.585911   78377 fix.go:56] duration metric: took 20.77354305s for fixHost
	I0422 18:25:16.585931   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.588815   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589234   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.589263   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589495   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.589713   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.589877   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.590039   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.590245   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.590396   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.590406   78377 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:16.704537   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810316.682617297
	
	I0422 18:25:16.704559   78377 fix.go:216] guest clock: 1713810316.682617297
	I0422 18:25:16.704569   78377 fix.go:229] Guest: 2024-04-22 18:25:16.682617297 +0000 UTC Remote: 2024-04-22 18:25:16.585915688 +0000 UTC m=+211.981005523 (delta=96.701609ms)
	I0422 18:25:16.704592   78377 fix.go:200] guest clock delta is within tolerance: 96.701609ms
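The fix.go lines above read the guest's clock over SSH with date +%s.%N and compare it to the host's wall clock, accepting a small drift (96.7ms here). A minimal Go sketch of that comparison, assuming a one-second tolerance purely for illustration; this is not minikube's fix.go:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output such as "1713810316.682617297"
// (seconds and nanoseconds from date +%s.%N) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N always prints nine digits, so the fraction is already nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1713810316.682617297")
	if err != nil {
		panic(err)
	}
	// In the real flow the host reading is taken at the same moment as the
	// SSH call, so the delta is tiny; run standalone it will be large.
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// One second is an assumed tolerance for this example only.
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < time.Second)
}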
	I0422 18:25:16.704600   78377 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 20.892277591s
	I0422 18:25:16.704631   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.704920   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:16.707837   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708205   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.708230   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708427   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.708994   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709163   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709240   78377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:16.709279   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.709342   78377 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:16.709364   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.712025   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712216   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712450   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712498   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712566   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.712674   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712720   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712722   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.712857   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.712945   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.713038   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.713101   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.713240   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.713370   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.804499   78377 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:16.836596   78377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:16.993049   78377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:17.000275   78377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:17.000346   78377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:17.023327   78377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:17.023351   78377 start.go:494] detecting cgroup driver to use...
	I0422 18:25:17.023425   78377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:17.045320   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:17.061622   78377 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:17.061692   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:17.078768   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:17.094562   78377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:17.221702   78377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:17.390374   78377 docker.go:233] disabling docker service ...
	I0422 18:25:17.390449   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:17.409352   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:17.425491   78377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:17.582359   78377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:17.735691   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:17.752812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:17.777437   78377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:25:17.777495   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.789378   78377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:17.789441   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.801159   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.813702   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.825938   78377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:17.841552   78377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:17.852365   78377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:17.852455   78377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:17.870233   78377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:17.882139   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:18.021505   78377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:18.179583   78377 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:18.179677   78377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:18.185047   78377 start.go:562] Will wait 60s for crictl version
	I0422 18:25:18.185105   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:18.189079   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:18.227533   78377 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:18.227643   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.260147   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.297011   78377 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:25:15.362667   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:17.861622   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:14.831683   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:14.831706   77929 pod_ready.go:81] duration metric: took 4.007420508s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:14.831715   77929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343025   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:16.343056   77929 pod_ready.go:81] duration metric: took 1.511333532s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343070   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351244   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:17.351267   77929 pod_ready.go:81] duration metric: took 1.008189798s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351280   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:19.365025   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:18.298407   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:18.301613   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302026   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:18.302057   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302317   78377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:18.307249   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:18.321575   78377 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:18.321721   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:25:18.321767   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:18.382066   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:18.382133   78377 ssh_runner.go:195] Run: which lz4
	I0422 18:25:18.387080   78377 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:25:18.392576   78377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:25:18.392613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:25:16.728745   77400 main.go:141] libmachine: (no-preload-407991) Calling .Start
	I0422 18:25:16.728946   77400 main.go:141] libmachine: (no-preload-407991) Ensuring networks are active...
	I0422 18:25:16.729604   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network default is active
	I0422 18:25:16.729979   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network mk-no-preload-407991 is active
	I0422 18:25:16.730458   77400 main.go:141] libmachine: (no-preload-407991) Getting domain xml...
	I0422 18:25:16.731314   77400 main.go:141] libmachine: (no-preload-407991) Creating domain...
	I0422 18:25:18.079763   77400 main.go:141] libmachine: (no-preload-407991) Waiting to get IP...
	I0422 18:25:18.080862   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.081371   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.081401   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.081340   79353 retry.go:31] will retry after 226.494122ms: waiting for machine to come up
	I0422 18:25:18.309499   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.309914   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.310019   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.309900   79353 retry.go:31] will retry after 375.374338ms: waiting for machine to come up
	I0422 18:25:18.686507   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.687064   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.687093   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.687018   79353 retry.go:31] will retry after 341.714326ms: waiting for machine to come up
	I0422 18:25:19.030772   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.031261   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.031290   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.031229   79353 retry.go:31] will retry after 388.101939ms: waiting for machine to come up
	I0422 18:25:19.420994   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.421478   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.421500   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.421397   79353 retry.go:31] will retry after 732.485222ms: waiting for machine to come up
	I0422 18:25:20.155887   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:20.156717   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:20.156750   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:20.156665   79353 retry.go:31] will retry after 950.207106ms: waiting for machine to come up
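The "will retry after ...: waiting for machine to come up" lines above are a poll loop: the driver repeatedly asks libvirt for the domain's DHCP-assigned IP and sleeps a randomised, growing delay between attempts. A rough Go sketch of that pattern (not the retry.go used here; lookupIP, the delay bounds and the attempt cap are assumptions for the example):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
// by the domain's MAC address; here it simply succeeds on the fifth try.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.0.2.10", nil // documentation-range placeholder address
}

func waitForIP(maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			return ip, nil
		}
		// Randomised delay that tends to grow with the attempt count,
		// mirroring the increasing intervals seen in the log.
		delay := time.Duration(200+rand.Intn(200*attempt)) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	fmt.Println(waitForIP(20))
}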
	I0422 18:25:19.878966   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.364111   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:21.859384   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.362519   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.362552   77929 pod_ready.go:81] duration metric: took 5.011264858s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.362566   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371087   77929 pod_ready.go:92] pod "kube-proxy-4xvx2" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.371112   77929 pod_ready.go:81] duration metric: took 8.534129ms for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371142   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376156   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.376183   77929 pod_ready.go:81] duration metric: took 5.03143ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376196   77929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:24.385435   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:20.319994   78377 crio.go:462] duration metric: took 1.932984536s to copy over tarball
	I0422 18:25:20.320076   78377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:25:23.622384   78377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.30227916s)
	I0422 18:25:23.622411   78377 crio.go:469] duration metric: took 3.302385661s to extract the tarball
	I0422 18:25:23.622419   78377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:23.678794   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:23.720105   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:23.720138   78377 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:23.720191   78377 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.720221   78377 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.720264   78377 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.720285   78377 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:25:23.720310   78377 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.720396   78377 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.720464   78377 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.720244   78377 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721865   78377 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.721895   78377 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.721911   78377 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721925   78377 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.721986   78377 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.722013   78377 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.722040   78377 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.722415   78377 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:25:23.947080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:25:23.956532   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.969401   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.975080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.977902   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.987657   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.091349   78377 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:25:24.091415   78377 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:25:24.091473   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091508   78377 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:25:24.091564   78377 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.091612   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091773   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.112708   78377 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:25:24.112758   78377 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.112807   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.156371   78377 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:25:24.156420   78377 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.156476   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209420   78377 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:25:24.209468   78377 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.209467   78377 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:25:24.209504   78377 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.209519   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209533   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209580   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:25:24.209613   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.209666   78377 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:25:24.209697   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.209700   78377 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.209721   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.209750   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.319159   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:25:24.319265   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:25:24.319294   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:25:24.319374   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:25:24.319453   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.319532   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.319575   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.406665   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:25:24.406699   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:25:24.406776   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:25:24.581672   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:21.108444   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:21.109056   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:21.109082   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:21.109004   79353 retry.go:31] will retry after 958.250136ms: waiting for machine to come up
	I0422 18:25:22.069541   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:22.070120   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:22.070144   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:22.070036   79353 retry.go:31] will retry after 989.607679ms: waiting for machine to come up
	I0422 18:25:23.061351   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:23.061877   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:23.061908   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:23.061823   79353 retry.go:31] will retry after 1.451989455s: waiting for machine to come up
	I0422 18:25:24.515233   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:24.515730   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:24.515755   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:24.515686   79353 retry.go:31] will retry after 2.303903602s: waiting for machine to come up
	I0422 18:25:24.365508   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.861066   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.389132   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:28.883625   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:24.724445   78377 cache_images.go:92] duration metric: took 1.004285991s to LoadCachedImages
	W0422 18:25:24.894312   78377 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0422 18:25:24.894361   78377 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:25:24.894488   78377 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:24.894582   78377 ssh_runner.go:195] Run: crio config
	I0422 18:25:24.951231   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:25:24.951266   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:24.951282   78377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:24.951305   78377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:25:24.951495   78377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:24.951570   78377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:25:24.964466   78377 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:24.964547   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:24.976092   78377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:25:24.995716   78377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:25.014159   78377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0422 18:25:25.036255   78377 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:25.040649   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:25.055323   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:25.186492   78377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:25.208819   78377 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:25:25.208862   78377 certs.go:194] generating shared ca certs ...
	I0422 18:25:25.208882   78377 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.209089   78377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:25.209144   78377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:25.209155   78377 certs.go:256] generating profile certs ...
	I0422 18:25:25.209307   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:25:25.209376   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:25:25.209438   78377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:25:25.209584   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:25.209623   78377 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:25.209632   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:25.209664   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:25.209701   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:25.209738   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:25.209791   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:25.210613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:25.262071   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:25.298556   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:25.331614   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:25.368285   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:25:25.403290   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:25.441081   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:25.487498   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:25:25.522482   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:25.549945   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:25.578991   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:25.608935   78377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:25.629179   78377 ssh_runner.go:195] Run: openssl version
	I0422 18:25:25.636149   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:25.648693   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653465   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653534   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.659701   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:25.671984   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:25.684361   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689344   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689410   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.695648   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:25.708266   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:25.721991   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726808   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726872   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.732974   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:25.749380   78377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:25.754517   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:25.761538   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:25.768472   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:25.775728   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:25.782337   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:25.788885   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:25:25.795677   78377 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:25.795771   78377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:25.795839   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.837381   78377 cri.go:89] found id: ""
	I0422 18:25:25.837437   78377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:25.848554   78377 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:25.848574   78377 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:25.848579   78377 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:25.848625   78377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:25.860204   78377 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:25.861212   78377 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:25:25.861884   78377 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-367072" cluster setting kubeconfig missing "old-k8s-version-367072" context setting]
	I0422 18:25:25.862851   78377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.864562   78377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:25.875151   78377 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.149
	I0422 18:25:25.875182   78377 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:25.875193   78377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:25.875255   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.915872   78377 cri.go:89] found id: ""
	I0422 18:25:25.915982   78377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:25.934776   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:25.946299   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:25.946326   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:25.946378   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:25:25.957495   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:25.957578   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:25.968843   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:25:25.981829   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:25.981909   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:25.995318   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.009567   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:26.009630   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.024306   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:25:26.036008   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:26.036075   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:26.046594   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:26.057056   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:26.207676   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.085460   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.324735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.431848   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.541157   78377 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:27.541254   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.042131   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.542270   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.041887   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.542069   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:26.821539   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:26.822006   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:26.822033   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:26.821950   79353 retry.go:31] will retry after 1.870697225s: waiting for machine to come up
	I0422 18:25:28.695072   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:28.695420   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:28.695466   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:28.695386   79353 retry.go:31] will retry after 2.327485176s: waiting for machine to come up
	I0422 18:25:28.861976   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:31.361339   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.883801   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:33.389422   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.041985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.541653   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.041304   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.542040   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.042024   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.541622   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.041428   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.541675   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.041841   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.541705   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.024382   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:31.024817   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:31.024845   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:31.024786   79353 retry.go:31] will retry after 2.767538103s: waiting for machine to come up
	I0422 18:25:33.794390   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:33.794834   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:33.794872   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:33.794808   79353 retry.go:31] will retry after 5.661373675s: waiting for machine to come up
	I0422 18:25:33.860276   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.861770   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:38.361316   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.883098   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:37.883749   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.041898   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.541499   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.041443   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.542150   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.042296   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.542002   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.041367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.541518   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.041471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.542025   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.457864   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458407   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has current primary IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458447   77400 main.go:141] libmachine: (no-preload-407991) Found IP for machine: 192.168.39.164
	I0422 18:25:39.458492   77400 main.go:141] libmachine: (no-preload-407991) Reserving static IP address...
	I0422 18:25:39.458954   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.458980   77400 main.go:141] libmachine: (no-preload-407991) DBG | skip adding static IP to network mk-no-preload-407991 - found existing host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"}
	I0422 18:25:39.458992   77400 main.go:141] libmachine: (no-preload-407991) Reserved static IP address: 192.168.39.164
	I0422 18:25:39.459012   77400 main.go:141] libmachine: (no-preload-407991) Waiting for SSH to be available...
	I0422 18:25:39.459027   77400 main.go:141] libmachine: (no-preload-407991) DBG | Getting to WaitForSSH function...
	I0422 18:25:39.461404   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461715   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.461746   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461875   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH client type: external
	I0422 18:25:39.461906   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa (-rw-------)
	I0422 18:25:39.461956   77400 main.go:141] libmachine: (no-preload-407991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:39.461974   77400 main.go:141] libmachine: (no-preload-407991) DBG | About to run SSH command:
	I0422 18:25:39.461992   77400 main.go:141] libmachine: (no-preload-407991) DBG | exit 0
	I0422 18:25:39.591446   77400 main.go:141] libmachine: (no-preload-407991) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:39.591795   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetConfigRaw
	I0422 18:25:39.592473   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.594928   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595379   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.595414   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595632   77400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/config.json ...
	I0422 18:25:39.595890   77400 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:39.595914   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:39.596103   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.598532   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.598899   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.598929   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.599071   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.599270   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599450   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599592   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.599728   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.599927   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.599942   77400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:39.712043   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:39.712081   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712336   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:25:39.712363   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712548   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.715474   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.715936   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.715960   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.716089   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.716265   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716396   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716530   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.716656   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.716860   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.716874   77400 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407991 && echo "no-preload-407991" | sudo tee /etc/hostname
	I0422 18:25:39.845921   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407991
	
	I0422 18:25:39.845959   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.848790   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849093   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.849121   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849288   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.849495   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849638   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849817   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.850014   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.850183   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.850200   77400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407991/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:39.977389   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:39.977427   77400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:39.977447   77400 buildroot.go:174] setting up certificates
	I0422 18:25:39.977456   77400 provision.go:84] configureAuth start
	I0422 18:25:39.977468   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.977754   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.980800   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981266   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.981305   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981458   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.984031   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984478   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.984510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984654   77400 provision.go:143] copyHostCerts
	I0422 18:25:39.984713   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:39.984725   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:39.984788   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:39.984907   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:39.984918   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:39.984952   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:39.985038   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:39.985048   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:39.985076   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:39.985158   77400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.no-preload-407991 san=[127.0.0.1 192.168.39.164 localhost minikube no-preload-407991]
	I0422 18:25:40.224235   77400 provision.go:177] copyRemoteCerts
	I0422 18:25:40.224306   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:40.224352   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.227355   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.227814   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.227842   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.228035   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.228232   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.228392   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.228560   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.318916   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:40.346168   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:40.371490   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:25:40.396866   77400 provision.go:87] duration metric: took 419.381117ms to configureAuth
	I0422 18:25:40.396899   77400 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:40.397067   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:25:40.397130   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.399642   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400060   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.400095   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400269   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.400466   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400652   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400832   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.401018   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.401176   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.401191   77400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:40.698107   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:40.698140   77400 machine.go:97] duration metric: took 1.102235221s to provisionDockerMachine
	I0422 18:25:40.698153   77400 start.go:293] postStartSetup for "no-preload-407991" (driver="kvm2")
	I0422 18:25:40.698171   77400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:40.698187   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.698497   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:40.698532   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.701545   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.701933   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.701964   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.702070   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.702295   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.702492   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.702727   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.800538   77400 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:40.805027   77400 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:40.805060   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:40.805133   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:40.805216   77400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:40.805304   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:40.816872   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:40.843857   77400 start.go:296] duration metric: took 145.69044ms for postStartSetup
	I0422 18:25:40.843896   77400 fix.go:56] duration metric: took 24.13914409s for fixHost
	I0422 18:25:40.843914   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.846770   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847148   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.847184   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847391   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.847605   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847778   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847966   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.848199   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.848382   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.848396   77400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:40.964440   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810340.939149386
	
	I0422 18:25:40.964473   77400 fix.go:216] guest clock: 1713810340.939149386
	I0422 18:25:40.964483   77400 fix.go:229] Guest: 2024-04-22 18:25:40.939149386 +0000 UTC Remote: 2024-04-22 18:25:40.843899302 +0000 UTC m=+360.205454093 (delta=95.250084ms)
	I0422 18:25:40.964508   77400 fix.go:200] guest clock delta is within tolerance: 95.250084ms
	I0422 18:25:40.964513   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 24.259798286s
	I0422 18:25:40.964535   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.964813   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:40.967510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.967906   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.967932   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.968087   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968610   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968782   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968866   77400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:40.968910   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.969047   77400 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:40.969074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.971818   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972039   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972190   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972203   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972394   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972565   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972580   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972594   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972733   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972791   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.972875   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972948   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.973062   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.973206   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:41.092004   77400 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:41.098574   77400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:41.242800   77400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:41.250454   77400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:41.250521   77400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:41.267380   77400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:41.267408   77400 start.go:494] detecting cgroup driver to use...
	I0422 18:25:41.267478   77400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:41.284742   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:41.299527   77400 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:41.299596   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:41.314189   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:41.329444   77400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:41.456719   77400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:41.628305   77400 docker.go:233] disabling docker service ...
	I0422 18:25:41.628376   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:41.643226   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:41.657578   77400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:41.780449   77400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:41.898823   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:41.913578   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:41.933621   77400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:25:41.933679   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.944309   77400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:41.944382   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.955308   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.966445   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.977509   77400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:41.989479   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.001915   77400 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.020554   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.033225   77400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:42.044177   77400 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:42.044231   77400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:42.060403   77400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:42.071760   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:42.213747   77400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:42.361818   77400 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:42.361911   77400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:42.367211   77400 start.go:562] Will wait 60s for crictl version
	I0422 18:25:42.367265   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.371042   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:42.408686   77400 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:42.408773   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.438447   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.469117   77400 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:25:40.862849   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.361826   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:39.884361   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:41.885199   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.885865   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:40.041777   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.541411   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.041834   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.542328   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.042211   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.542008   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.041844   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.542121   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.041564   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.541344   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.470665   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:42.473467   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.473845   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:42.473871   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.474121   77400 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:42.478401   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:42.491034   77400 kubeadm.go:877] updating cluster {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:42.491163   77400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:25:42.491203   77400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:42.530418   77400 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:25:42.530443   77400 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.530585   77400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.530641   77400 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0422 18:25:42.530601   77400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.530609   77400 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.530622   77400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.530626   77400 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532108   77400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.532136   77400 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0422 18:25:42.532111   77400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.532113   77400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.532175   77400 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532197   77400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.532223   77400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.532506   77400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.735366   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.750777   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0422 18:25:42.758260   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.759633   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.763447   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.765416   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.803799   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.832904   77400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0422 18:25:42.832959   77400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.833021   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981471   77400 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0422 18:25:42.981528   77400 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.981553   77400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0422 18:25:42.981584   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981592   77400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.981635   77400 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0422 18:25:42.981663   77400 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.981687   77400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0422 18:25:42.981699   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981642   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981716   77400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.981770   77400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0422 18:25:42.981776   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981788   77400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.981820   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981846   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:43.021364   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0422 18:25:43.021416   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:43.021455   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.021460   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:43.021529   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:43.021534   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:43.021585   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:43.130300   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0422 18:25:43.130373   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0422 18:25:43.130408   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:43.130425   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0422 18:25:43.130455   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:43.130514   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:43.134769   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0422 18:25:43.134785   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0422 18:25:43.134797   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134839   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134853   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:43.134882   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0422 18:25:43.134959   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:43.142273   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0422 18:25:43.142486   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0422 18:25:43.142837   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0422 18:25:43.840108   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210614   77400 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.075740127s)
	I0422 18:25:45.210650   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0422 18:25:45.210655   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.075789371s)
	I0422 18:25:45.210676   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0422 18:25:45.210693   77400 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.075715404s)
	I0422 18:25:45.210699   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210706   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0422 18:25:45.210748   77400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.370610047s)
	I0422 18:25:45.210785   77400 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0422 18:25:45.210750   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210842   77400 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210969   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:45.363082   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:47.861802   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:46.383938   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:48.385209   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:45.042273   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.541576   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.041447   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.541920   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.042364   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.541813   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.042362   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.541320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.041845   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.542204   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.203063   77400 ssh_runner.go:235] Completed: which crictl: (2.992066474s)
	I0422 18:25:48.203106   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.992228832s)
	I0422 18:25:48.203143   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0422 18:25:48.203159   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:48.203171   77400 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:48.203210   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:49.863963   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:52.370507   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.883608   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:53.386229   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.042263   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.541538   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.042055   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.041479   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.542313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.041554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.541500   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.042153   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.541953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.419429   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.216195193s)
	I0422 18:25:52.419462   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0422 18:25:52.419474   77400 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.216288559s)
	I0422 18:25:52.419488   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419513   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0422 18:25:52.419537   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419581   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:52.424638   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0422 18:25:53.873720   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.454157304s)
	I0422 18:25:53.873750   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0422 18:25:53.873780   77400 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:53.873825   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:54.860810   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:56.864272   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.388103   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:57.887970   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.041393   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.541470   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.042188   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.541734   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.042041   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.541540   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.041682   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.542178   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.042125   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.542154   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.955181   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.081308071s)
	I0422 18:25:55.955210   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0422 18:25:55.955236   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:55.955300   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:58.218734   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.263410883s)
	I0422 18:25:58.218762   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0422 18:25:58.218792   77400 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:58.218843   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:59.071398   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0422 18:25:59.071443   77400 cache_images.go:123] Successfully loaded all cached images
	I0422 18:25:59.071450   77400 cache_images.go:92] duration metric: took 16.54097573s to LoadCachedImages
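
	[editor's note] The image transfer logged above follows one pattern per image on the guest: inspect the image in the CRI-O/podman store, remove any stale tag with crictl, then podman-load the cached tarball. Below is a minimal local sketch of that pattern, assuming the tarballs already sit under /var/lib/minikube/images and that sudo, podman and crictl are available; the helper name ensureImage is hypothetical, and minikube's real implementation lives in cache_images.go / crio.go and runs these commands over SSH.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureImage loads a cached image tarball into the CRI-O/podman store if
	// the image is not already present, mirroring the inspect/rmi/load sequence
	// in the log above.
	func ensureImage(image, tarball string) error {
		// Same check as the log's `sudo podman image inspect --format {{.Id}}`:
		// exit status 0 means the image already exists in the store.
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil
		}
		// Drop any partial or stale tag first (ignore the error if it was never there).
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		// Load the cached tarball into the runtime.
		if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		if err := ensureImage("registry.k8s.io/kube-proxy:v1.30.0",
			"/var/lib/minikube/images/kube-proxy_v1.30.0"); err != nil {
			fmt.Println(err)
		}
	}
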
	I0422 18:25:59.071463   77400 kubeadm.go:928] updating node { 192.168.39.164 8443 v1.30.0 crio true true} ...
	I0422 18:25:59.071610   77400 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-407991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:59.071698   77400 ssh_runner.go:195] Run: crio config
	I0422 18:25:59.125757   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:25:59.125783   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:59.125800   77400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:59.125832   77400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-407991 NodeName:no-preload-407991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:59.126001   77400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-407991"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:59.126073   77400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:59.137254   77400 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:59.137320   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:59.146983   77400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0422 18:25:59.165207   77400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:59.182898   77400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0422 18:25:59.201735   77400 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:59.206108   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:59.219642   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:59.336565   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:59.356844   77400 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991 for IP: 192.168.39.164
	I0422 18:25:59.356873   77400 certs.go:194] generating shared ca certs ...
	I0422 18:25:59.356893   77400 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:59.357058   77400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:59.357121   77400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:59.357133   77400 certs.go:256] generating profile certs ...
	I0422 18:25:59.357209   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/client.key
	I0422 18:25:59.357329   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key.6aa1268b
	I0422 18:25:59.357413   77400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key
	I0422 18:25:59.357574   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:59.357616   77400 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:59.357631   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:59.357672   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:59.357707   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:59.357745   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:59.357823   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:59.358765   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:59.395982   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:59.430445   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:59.465415   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:59.502678   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 18:25:59.538225   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:25:59.570635   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:59.596096   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:59.622051   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:59.647372   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:59.673650   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:59.699515   77400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:59.717253   77400 ssh_runner.go:195] Run: openssl version
	I0422 18:25:59.723704   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:59.735265   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740264   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740319   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.746445   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:59.757879   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:59.769243   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774505   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774562   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.780572   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:59.793472   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:59.805187   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810148   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810191   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.816350   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:59.828208   77400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:59.832799   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:59.838952   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:59.845145   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:59.851309   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:59.857643   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:59.864892   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
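
	[editor's note] The six openssl runs above are certificate-expiry checks: `openssl x509 -noout -in <cert> -checkend 86400` exits non-zero if the certificate expires within the next 24 hours, which is what triggers minikube to regenerate certs on restart. A minimal sketch of the same check, assuming openssl is on PATH and the cert files are readable; the function name certsValidFor24h is hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certsValidFor24h fails if any of the given certificates expires within
	// the next 86400 seconds (24h) or cannot be read, mirroring the log above.
	func certsValidFor24h(paths []string) error {
		for _, p := range paths {
			if err := exec.Command("openssl", "x509", "-noout", "-in", p, "-checkend", "86400").Run(); err != nil {
				return fmt.Errorf("%s expires within 24h or is unreadable: %w", p, err)
			}
		}
		return nil
	}

	func main() {
		// Paths copied from the log; on a minikube guest these require root to read.
		paths := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		if err := certsValidFor24h(paths); err != nil {
			fmt.Println(err)
		}
	}
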
	I0422 18:25:59.873625   77400 kubeadm.go:391] StartCluster: {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:59.873749   77400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:59.873826   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.913578   77400 cri.go:89] found id: ""
	I0422 18:25:59.913656   77400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:59.925105   77400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:59.925131   77400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:59.925138   77400 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:59.925192   77400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:59.935942   77400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:59.937363   77400 kubeconfig.go:125] found "no-preload-407991" server: "https://192.168.39.164:8443"
	I0422 18:25:59.939672   77400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:59.949774   77400 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.164
	I0422 18:25:59.949810   77400 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:59.949841   77400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:59.949896   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.989385   77400 cri.go:89] found id: ""
	I0422 18:25:59.989443   77400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:26:00.005985   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:26:00.016873   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:26:00.016897   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:26:00.016953   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:26:00.027119   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:26:00.027205   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:26:00.038360   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:26:00.048176   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:26:00.048246   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:26:00.058861   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.068955   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:26:00.069018   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.079147   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:26:00.089400   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:26:00.089477   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:26:00.100245   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:26:00.111040   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:00.224436   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:59.362215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:01.860196   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.388433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:02.883211   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.042114   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.542138   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.042285   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.542226   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.041310   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.541432   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.041406   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.542306   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.042010   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.541508   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.838456   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.057201   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.143346   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.294896   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:26:01.295031   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.795945   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.296085   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.324434   77400 api_server.go:72] duration metric: took 1.029539423s to wait for apiserver process to appear ...
	I0422 18:26:02.324467   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:26:02.324490   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.784948   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:26:04.784984   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:26:04.784997   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.844019   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.844064   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:04.844084   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.848805   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.848838   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.325458   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.332351   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.332410   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.824785   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.830293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.830318   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:06.325380   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:06.332804   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:26:06.344083   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:26:06.344110   77400 api_server.go:131] duration metric: took 4.019636154s to wait for apiserver health ...
	I0422 18:26:06.344118   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:26:06.344123   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:26:06.345875   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:26:03.863020   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:06.360428   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:04.884648   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:07.382356   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:09.388391   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:05.041961   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.541723   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.041954   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.541963   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.041378   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.541879   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.041942   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.541357   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.041425   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.541474   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.347812   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:26:06.361087   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:26:06.385654   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:26:06.398331   77400 system_pods.go:59] 8 kube-system pods found
	I0422 18:26:06.398372   77400 system_pods.go:61] "coredns-7db6d8ff4d-2p2sr" [3f42ce46-e76d-4bc8-9dd5-463a08948e4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:26:06.398384   77400 system_pods.go:61] "etcd-no-preload-407991" [96ae7feb-802f-44a8-81fc-5ea5de12e73b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:26:06.398396   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [28010e33-49a1-4c6b-90f9-939ede3ed97e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:26:06.398404   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [1e7db029-2196-499f-bc88-d780d065f80c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:26:06.398415   77400 system_pods.go:61] "kube-proxy-767q4" [1c6d01b0-caf0-4d52-8da8-caad7b158012] Running
	I0422 18:26:06.398426   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [3ef8d145-d90e-455d-98fe-de9e6080a178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:26:06.398433   77400 system_pods.go:61] "metrics-server-569cc877fc-jmjhm" [d831b01b-af2e-4c7f-944c-e768d724ee5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:26:06.398439   77400 system_pods.go:61] "storage-provisioner" [db8196df-a394-4e10-9db7-c10414833af3] Running
	I0422 18:26:06.398447   77400 system_pods.go:74] duration metric: took 12.770066ms to wait for pod list to return data ...
	I0422 18:26:06.398455   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:26:06.402125   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:26:06.402158   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:26:06.402170   77400 node_conditions.go:105] duration metric: took 3.709194ms to run NodePressure ...
	I0422 18:26:06.402195   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:06.676133   77400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680247   77400 kubeadm.go:733] kubelet initialised
	I0422 18:26:06.680269   77400 kubeadm.go:734] duration metric: took 4.114413ms waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680276   77400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:26:06.687275   77400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.693967   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.693986   77400 pod_ready.go:81] duration metric: took 6.687466ms for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.694004   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.694012   77400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.698539   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698562   77400 pod_ready.go:81] duration metric: took 4.539271ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.698571   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698578   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.703382   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703407   77400 pod_ready.go:81] duration metric: took 4.822601ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.703418   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703425   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.789413   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789449   77400 pod_ready.go:81] duration metric: took 86.014056ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.789459   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789465   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189544   77400 pod_ready.go:92] pod "kube-proxy-767q4" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:07.189572   77400 pod_ready.go:81] duration metric: took 400.096716ms for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189585   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:09.201757   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:08.861714   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.359820   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.362303   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.883726   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:14.382966   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:10.041640   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.541360   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.042045   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.542018   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.541590   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.042320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.542036   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.041303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.541575   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.697196   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.697458   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.861378   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:17.861808   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:16.385523   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:18.883000   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.042300   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.542084   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.541867   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.041409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.542019   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.042027   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.042237   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.541613   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.697079   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:15.697104   77400 pod_ready.go:81] duration metric: took 8.507511233s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:15.697116   77400 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:17.704095   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.204276   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.360946   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:22.861202   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.883107   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:23.383119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.042039   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.541667   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.041765   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.542383   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.042213   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.541317   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.042164   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.541367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.042303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.541416   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.204697   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.703926   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.861797   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.361089   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.384161   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.386172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.042321   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.541554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.041583   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.542179   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.041877   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.541400   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:27.541473   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:27.585381   78377 cri.go:89] found id: ""
	I0422 18:26:27.585411   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.585424   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:27.585431   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:27.585503   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:27.622536   78377 cri.go:89] found id: ""
	I0422 18:26:27.622568   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.622578   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:27.622584   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:27.622645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:27.665233   78377 cri.go:89] found id: ""
	I0422 18:26:27.665264   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.665272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:27.665278   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:27.665356   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:27.703600   78377 cri.go:89] found id: ""
	I0422 18:26:27.703629   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.703640   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:27.703647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:27.703706   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:27.741412   78377 cri.go:89] found id: ""
	I0422 18:26:27.741441   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.741451   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:27.741459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:27.741520   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:27.783184   78377 cri.go:89] found id: ""
	I0422 18:26:27.783211   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.783218   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:27.783224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:27.783290   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:27.825404   78377 cri.go:89] found id: ""
	I0422 18:26:27.825433   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.825443   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:27.825450   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:27.825513   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:27.862052   78377 cri.go:89] found id: ""
	I0422 18:26:27.862076   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.862086   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:27.862096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:27.862109   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:27.914533   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:27.914564   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:27.929474   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:27.929502   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:28.054566   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:28.054595   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:28.054612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:28.119416   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:28.119451   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:27.204128   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.207057   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.364913   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.883085   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.883536   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.883927   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:30.667642   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:30.680870   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:30.680930   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:30.719832   78377 cri.go:89] found id: ""
	I0422 18:26:30.719863   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.719874   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:30.719881   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:30.719940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:30.756168   78377 cri.go:89] found id: ""
	I0422 18:26:30.756195   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.756206   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:30.756213   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:30.756267   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:30.792940   78377 cri.go:89] found id: ""
	I0422 18:26:30.792963   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.792971   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:30.792976   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:30.793021   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:30.827452   78377 cri.go:89] found id: ""
	I0422 18:26:30.827480   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.827490   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:30.827497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:30.827563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:30.868058   78377 cri.go:89] found id: ""
	I0422 18:26:30.868088   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.868099   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:30.868107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:30.868170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:30.908639   78377 cri.go:89] found id: ""
	I0422 18:26:30.908672   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.908680   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:30.908686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:30.908735   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:30.959048   78377 cri.go:89] found id: ""
	I0422 18:26:30.959073   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.959080   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:30.959085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:30.959153   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:30.998779   78377 cri.go:89] found id: ""
	I0422 18:26:30.998809   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.998821   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:30.998856   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:30.998875   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:31.053763   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:31.053804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:31.069522   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:31.069558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:31.147512   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:31.147541   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:31.147556   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:31.222713   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:31.222752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:33.765573   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:33.781038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:33.781116   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:33.822148   78377 cri.go:89] found id: ""
	I0422 18:26:33.822175   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.822182   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:33.822187   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:33.822282   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:33.862524   78377 cri.go:89] found id: ""
	I0422 18:26:33.862553   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.862559   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:33.862565   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:33.862626   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:33.905952   78377 cri.go:89] found id: ""
	I0422 18:26:33.905980   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.905991   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:33.905999   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:33.906059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:33.943184   78377 cri.go:89] found id: ""
	I0422 18:26:33.943212   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.943220   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:33.943227   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:33.943285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:33.981677   78377 cri.go:89] found id: ""
	I0422 18:26:33.981712   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.981723   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:33.981731   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:33.981790   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:34.025999   78377 cri.go:89] found id: ""
	I0422 18:26:34.026026   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.026035   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:34.026042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:34.026102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:34.062940   78377 cri.go:89] found id: ""
	I0422 18:26:34.062967   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.062977   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:34.062985   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:34.063044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:34.103112   78377 cri.go:89] found id: ""
	I0422 18:26:34.103153   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.103164   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:34.103175   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:34.103189   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:34.156907   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:34.156944   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:34.171581   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:34.171608   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:34.252755   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:34.252784   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:34.252799   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:34.334118   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:34.334155   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:31.704123   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:34.206443   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.863261   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.360525   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.361132   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.385507   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.882649   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.882905   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:36.897949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:36.898026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:36.934776   78377 cri.go:89] found id: ""
	I0422 18:26:36.934801   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.934808   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:36.934814   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:36.934870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:36.974432   78377 cri.go:89] found id: ""
	I0422 18:26:36.974459   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.974467   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:36.974472   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:36.974519   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:37.011460   78377 cri.go:89] found id: ""
	I0422 18:26:37.011485   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.011496   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:37.011503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:37.011583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:37.056559   78377 cri.go:89] found id: ""
	I0422 18:26:37.056592   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.056604   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:37.056611   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:37.056670   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:37.095328   78377 cri.go:89] found id: ""
	I0422 18:26:37.095359   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.095371   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:37.095379   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:37.095460   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:37.132056   78377 cri.go:89] found id: ""
	I0422 18:26:37.132084   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.132095   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:37.132101   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:37.132162   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:37.168957   78377 cri.go:89] found id: ""
	I0422 18:26:37.168987   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.168998   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:37.169005   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:37.169072   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:37.207501   78377 cri.go:89] found id: ""
	I0422 18:26:37.207533   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.207544   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:37.207553   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:37.207567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:37.289851   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:37.289890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:37.351454   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:37.351481   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:37.409901   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:37.409938   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:37.425203   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:37.425234   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:37.508518   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:36.704473   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:39.204839   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.863837   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.362000   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.887004   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.384351   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.008934   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:40.023037   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:40.023096   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:40.066750   78377 cri.go:89] found id: ""
	I0422 18:26:40.066791   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.066811   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:40.066818   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:40.066889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:40.106562   78377 cri.go:89] found id: ""
	I0422 18:26:40.106584   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.106592   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:40.106598   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:40.106644   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:40.145265   78377 cri.go:89] found id: ""
	I0422 18:26:40.145300   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.145311   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:40.145319   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:40.145385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:40.182667   78377 cri.go:89] found id: ""
	I0422 18:26:40.182696   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.182707   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:40.182714   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:40.182772   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:40.227084   78377 cri.go:89] found id: ""
	I0422 18:26:40.227114   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.227139   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:40.227148   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:40.227203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:40.264298   78377 cri.go:89] found id: ""
	I0422 18:26:40.264326   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.264333   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:40.264339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:40.264404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:40.302071   78377 cri.go:89] found id: ""
	I0422 18:26:40.302103   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.302113   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:40.302121   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:40.302191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:40.340031   78377 cri.go:89] found id: ""
	I0422 18:26:40.340072   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.340083   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:40.340094   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:40.340108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:40.386371   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:40.386402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:40.438805   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:40.438884   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:40.455199   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:40.455240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:40.535984   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:40.536006   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:40.536024   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.125605   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:43.139961   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:43.140033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:43.176588   78377 cri.go:89] found id: ""
	I0422 18:26:43.176615   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.176625   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:43.176632   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:43.176695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:43.215868   78377 cri.go:89] found id: ""
	I0422 18:26:43.215900   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.215921   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:43.215929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:43.215991   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:43.253562   78377 cri.go:89] found id: ""
	I0422 18:26:43.253592   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.253603   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:43.253608   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:43.253652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:43.289305   78377 cri.go:89] found id: ""
	I0422 18:26:43.289335   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.289346   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:43.289353   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:43.289417   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:43.329241   78377 cri.go:89] found id: ""
	I0422 18:26:43.329286   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.329295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:43.329300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:43.329351   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:43.369682   78377 cri.go:89] found id: ""
	I0422 18:26:43.369700   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.369707   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:43.369713   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:43.369764   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:43.411788   78377 cri.go:89] found id: ""
	I0422 18:26:43.411812   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.411821   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:43.411829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:43.411911   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:43.447351   78377 cri.go:89] found id: ""
	I0422 18:26:43.447387   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.447398   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:43.447407   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:43.447418   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:43.520087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:43.520114   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:43.520125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.602199   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:43.602233   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:43.645723   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:43.645748   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:43.702769   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:43.702804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:41.704418   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.704878   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.362073   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.860279   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.385285   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.882420   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:46.229598   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:46.243348   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:46.243418   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:46.282470   78377 cri.go:89] found id: ""
	I0422 18:26:46.282500   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.282512   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:46.282519   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:46.282584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:46.327718   78377 cri.go:89] found id: ""
	I0422 18:26:46.327747   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.327755   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:46.327761   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:46.327829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:46.369785   78377 cri.go:89] found id: ""
	I0422 18:26:46.369807   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.369814   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:46.369820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:46.369867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:46.408132   78377 cri.go:89] found id: ""
	I0422 18:26:46.408161   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.408170   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:46.408175   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:46.408236   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:46.450058   78377 cri.go:89] found id: ""
	I0422 18:26:46.450084   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.450091   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:46.450096   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:46.450144   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:46.493747   78377 cri.go:89] found id: ""
	I0422 18:26:46.493776   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.493788   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:46.493794   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:46.493847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:46.529054   78377 cri.go:89] found id: ""
	I0422 18:26:46.529090   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.529102   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:46.529122   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:46.529186   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:46.566699   78377 cri.go:89] found id: ""
	I0422 18:26:46.566724   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.566732   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:46.566740   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:46.566752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.582569   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:46.582606   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:46.652188   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:46.652212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:46.652224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:46.732276   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:46.732316   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:46.789834   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:46.789862   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.343229   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:49.357513   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:49.357571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:49.396741   78377 cri.go:89] found id: ""
	I0422 18:26:49.396774   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.396785   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:49.396792   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:49.396862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:49.432048   78377 cri.go:89] found id: ""
	I0422 18:26:49.432081   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.432093   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:49.432100   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:49.432159   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:49.482104   78377 cri.go:89] found id: ""
	I0422 18:26:49.482130   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.482138   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:49.482145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:49.482202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:49.526782   78377 cri.go:89] found id: ""
	I0422 18:26:49.526811   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.526823   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:49.526830   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:49.526884   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:49.575436   78377 cri.go:89] found id: ""
	I0422 18:26:49.575471   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.575482   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:49.575490   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:49.575553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:49.628839   78377 cri.go:89] found id: ""
	I0422 18:26:49.628862   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.628870   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:49.628875   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:49.628940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:45.706474   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:48.205681   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.860748   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.360586   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.884553   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:51.885527   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.387502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.670046   78377 cri.go:89] found id: ""
	I0422 18:26:49.670074   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.670085   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:49.670091   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:49.670158   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:49.707083   78377 cri.go:89] found id: ""
	I0422 18:26:49.707109   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.707119   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:49.707144   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:49.707157   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.762794   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:49.762838   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:49.777771   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:49.777801   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:49.853426   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:49.853448   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:49.853463   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:49.934621   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:49.934659   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:52.481352   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:52.495956   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:52.496025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:52.539518   78377 cri.go:89] found id: ""
	I0422 18:26:52.539549   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.539559   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:52.539566   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:52.539627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:52.580604   78377 cri.go:89] found id: ""
	I0422 18:26:52.580632   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.580641   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:52.580646   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:52.580700   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:52.622746   78377 cri.go:89] found id: ""
	I0422 18:26:52.622775   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.622783   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:52.622795   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:52.622858   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:52.659557   78377 cri.go:89] found id: ""
	I0422 18:26:52.659579   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.659587   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:52.659592   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:52.659661   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:52.697653   78377 cri.go:89] found id: ""
	I0422 18:26:52.697678   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.697685   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:52.697691   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:52.697745   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:52.735505   78377 cri.go:89] found id: ""
	I0422 18:26:52.735536   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.735546   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:52.735554   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:52.735616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:52.774216   78377 cri.go:89] found id: ""
	I0422 18:26:52.774239   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.774247   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:52.774261   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:52.774318   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:52.812909   78377 cri.go:89] found id: ""
	I0422 18:26:52.812934   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.812941   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:52.812949   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:52.812981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:52.897636   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:52.897663   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:52.897679   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:52.985013   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:52.985046   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:53.031395   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:53.031427   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:53.088446   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:53.088480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:50.703624   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.704794   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.204187   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.861314   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:57.360430   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:56.882974   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:58.884770   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.603647   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:55.617977   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:55.618039   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:55.663769   78377 cri.go:89] found id: ""
	I0422 18:26:55.663797   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.663815   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:55.663822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:55.663925   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:55.701287   78377 cri.go:89] found id: ""
	I0422 18:26:55.701326   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.701338   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:55.701346   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:55.701435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:55.740041   78377 cri.go:89] found id: ""
	I0422 18:26:55.740067   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.740078   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:55.740107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:55.740163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:55.779093   78377 cri.go:89] found id: ""
	I0422 18:26:55.779143   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.779154   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:55.779170   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:55.779219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:55.822107   78377 cri.go:89] found id: ""
	I0422 18:26:55.822133   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.822141   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:55.822146   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:55.822195   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:55.862157   78377 cri.go:89] found id: ""
	I0422 18:26:55.862204   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.862215   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:55.862224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:55.862295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:55.902557   78377 cri.go:89] found id: ""
	I0422 18:26:55.902582   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.902595   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:55.902601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:55.902663   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:55.942185   78377 cri.go:89] found id: ""
	I0422 18:26:55.942215   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.942226   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:55.942237   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:55.942252   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.957050   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:55.957083   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:56.035015   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:56.035043   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:56.035058   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:56.125595   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:56.125636   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:56.169096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:56.169131   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:58.725079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:58.739736   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:58.739808   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:58.777724   78377 cri.go:89] found id: ""
	I0422 18:26:58.777752   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.777762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:58.777769   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:58.777828   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:58.814668   78377 cri.go:89] found id: ""
	I0422 18:26:58.814702   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.814713   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:58.814721   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:58.814791   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:58.852609   78377 cri.go:89] found id: ""
	I0422 18:26:58.852634   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.852648   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:58.852655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:58.852720   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:58.891881   78377 cri.go:89] found id: ""
	I0422 18:26:58.891904   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.891910   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:58.891936   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:58.891994   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:58.931663   78377 cri.go:89] found id: ""
	I0422 18:26:58.931690   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.931701   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:58.931708   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:58.931782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:58.967795   78377 cri.go:89] found id: ""
	I0422 18:26:58.967816   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.967823   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:58.967829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:58.967879   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:59.008898   78377 cri.go:89] found id: ""
	I0422 18:26:59.008932   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.008943   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:59.008950   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:59.009007   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:59.049230   78377 cri.go:89] found id: ""
	I0422 18:26:59.049267   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.049278   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:59.049288   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:59.049304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:59.104461   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:59.104508   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:59.119555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:59.119584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:59.195905   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:59.195952   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:59.195969   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:59.276319   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:59.276360   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:57.703613   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:00.205449   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:59.861376   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.862613   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.386313   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:03.883728   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.818221   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:01.833234   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:01.833294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:01.870997   78377 cri.go:89] found id: ""
	I0422 18:27:01.871022   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.871030   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:01.871036   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:01.871102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:01.910414   78377 cri.go:89] found id: ""
	I0422 18:27:01.910443   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.910453   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:01.910461   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:01.910526   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:01.949499   78377 cri.go:89] found id: ""
	I0422 18:27:01.949524   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.949532   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:01.949537   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:01.949598   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:01.987702   78377 cri.go:89] found id: ""
	I0422 18:27:01.987736   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.987747   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:01.987763   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:01.987836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:02.027193   78377 cri.go:89] found id: ""
	I0422 18:27:02.027222   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.027233   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:02.027240   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:02.027332   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:02.067537   78377 cri.go:89] found id: ""
	I0422 18:27:02.067564   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.067578   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:02.067584   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:02.067631   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:02.111085   78377 cri.go:89] found id: ""
	I0422 18:27:02.111112   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.111119   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:02.111140   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:02.111194   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:02.150730   78377 cri.go:89] found id: ""
	I0422 18:27:02.150760   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.150769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:02.150777   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:02.150789   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:02.230124   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:02.230150   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:02.230164   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:02.315337   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:02.315384   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:02.362022   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:02.362048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:02.421884   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:02.421924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:02.205610   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.704158   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.359865   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:06.359968   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.360935   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:05.884072   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.386493   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.937145   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:04.952303   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:04.952412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:04.995024   78377 cri.go:89] found id: ""
	I0422 18:27:04.995059   78377 logs.go:276] 0 containers: []
	W0422 18:27:04.995071   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:04.995079   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:04.995151   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:05.035094   78377 cri.go:89] found id: ""
	I0422 18:27:05.035129   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.035141   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:05.035148   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:05.035204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:05.074178   78377 cri.go:89] found id: ""
	I0422 18:27:05.074204   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.074215   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:05.074222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:05.074294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:05.115285   78377 cri.go:89] found id: ""
	I0422 18:27:05.115313   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.115324   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:05.115331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:05.115398   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:05.151000   78377 cri.go:89] found id: ""
	I0422 18:27:05.151032   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.151041   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:05.151047   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:05.151189   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:05.191627   78377 cri.go:89] found id: ""
	I0422 18:27:05.191651   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.191659   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:05.191664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:05.191710   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:05.232141   78377 cri.go:89] found id: ""
	I0422 18:27:05.232173   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.232183   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:05.232191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:05.232252   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:05.268498   78377 cri.go:89] found id: ""
	I0422 18:27:05.268523   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.268530   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:05.268537   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:05.268554   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:05.315909   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:05.315937   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:05.369623   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:05.369664   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:05.387343   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:05.387381   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:05.466087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:05.466106   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:05.466117   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:08.053578   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:08.067569   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:08.067627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:08.108274   78377 cri.go:89] found id: ""
	I0422 18:27:08.108307   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.108318   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:08.108325   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:08.108384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:08.155343   78377 cri.go:89] found id: ""
	I0422 18:27:08.155366   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.155373   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:08.155379   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:08.155435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:08.194636   78377 cri.go:89] found id: ""
	I0422 18:27:08.194661   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.194672   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:08.194677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:08.194724   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:08.232992   78377 cri.go:89] found id: ""
	I0422 18:27:08.233017   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.233024   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:08.233029   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:08.233076   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:08.271349   78377 cri.go:89] found id: ""
	I0422 18:27:08.271381   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.271391   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:08.271407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:08.271459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:08.311991   78377 cri.go:89] found id: ""
	I0422 18:27:08.312021   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.312033   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:08.312042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:08.312097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:08.353301   78377 cri.go:89] found id: ""
	I0422 18:27:08.353326   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.353333   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:08.353340   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:08.353399   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:08.391989   78377 cri.go:89] found id: ""
	I0422 18:27:08.392015   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.392025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:08.392035   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:08.392048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:08.437228   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:08.437260   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:08.489086   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:08.489121   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:08.503588   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:08.503616   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:08.583824   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:08.583845   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:08.583858   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:07.203802   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:09.204754   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.862854   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.361215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.883779   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:12.883989   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:11.164702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:11.178228   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:11.178293   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:11.217691   78377 cri.go:89] found id: ""
	I0422 18:27:11.217719   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.217729   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:11.217735   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:11.217796   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:11.253648   78377 cri.go:89] found id: ""
	I0422 18:27:11.253676   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.253685   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:11.253692   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:11.253753   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:11.290934   78377 cri.go:89] found id: ""
	I0422 18:27:11.290968   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.290979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:11.290988   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:11.291051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:11.331215   78377 cri.go:89] found id: ""
	I0422 18:27:11.331240   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.331249   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:11.331254   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:11.331344   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:11.371595   78377 cri.go:89] found id: ""
	I0422 18:27:11.371621   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.371629   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:11.371634   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:11.371697   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:11.413577   78377 cri.go:89] found id: ""
	I0422 18:27:11.413607   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.413616   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:11.413624   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:11.413684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:11.450669   78377 cri.go:89] found id: ""
	I0422 18:27:11.450700   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.450709   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:11.450717   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:11.450779   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:11.488096   78377 cri.go:89] found id: ""
	I0422 18:27:11.488122   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.488131   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:11.488142   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:11.488156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.540258   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:11.540299   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:11.555878   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:11.555922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:11.638190   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:11.638212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:11.638224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.719691   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:11.719726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:14.268811   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:14.283695   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:14.283749   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:14.323252   78377 cri.go:89] found id: ""
	I0422 18:27:14.323286   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.323299   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:14.323306   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:14.323370   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:14.362354   78377 cri.go:89] found id: ""
	I0422 18:27:14.362375   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.362382   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:14.362387   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:14.362450   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:14.405439   78377 cri.go:89] found id: ""
	I0422 18:27:14.405460   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.405467   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:14.405473   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:14.405531   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:14.445358   78377 cri.go:89] found id: ""
	I0422 18:27:14.445389   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.445399   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:14.445407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:14.445476   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:14.481933   78377 cri.go:89] found id: ""
	I0422 18:27:14.481961   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.481969   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:14.481974   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:14.482033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:14.526992   78377 cri.go:89] found id: ""
	I0422 18:27:14.527019   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.527028   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:14.527040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:14.527089   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:14.562197   78377 cri.go:89] found id: ""
	I0422 18:27:14.562221   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.562229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:14.562238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:14.562287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:14.599098   78377 cri.go:89] found id: ""
	I0422 18:27:14.599141   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.599153   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:14.599164   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:14.599177   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.205525   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.706785   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:15.861009   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.861214   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.884371   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.384911   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.655768   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:14.655800   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:14.670894   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:14.670929   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:14.759845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:14.759863   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:14.759874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:14.839715   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:14.839752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:17.384859   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:17.399664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:17.399741   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:17.439786   78377 cri.go:89] found id: ""
	I0422 18:27:17.439809   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.439817   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:17.439822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:17.439878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:17.476532   78377 cri.go:89] found id: ""
	I0422 18:27:17.476553   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.476561   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:17.476566   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:17.476623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:17.513464   78377 cri.go:89] found id: ""
	I0422 18:27:17.513488   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.513495   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:17.513500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:17.513546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:17.548793   78377 cri.go:89] found id: ""
	I0422 18:27:17.548821   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.548831   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:17.548838   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:17.548888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:17.584600   78377 cri.go:89] found id: ""
	I0422 18:27:17.584626   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.584636   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:17.584644   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:17.584705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:17.621574   78377 cri.go:89] found id: ""
	I0422 18:27:17.621603   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.621615   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:17.621622   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:17.621686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:17.663252   78377 cri.go:89] found id: ""
	I0422 18:27:17.663283   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.663290   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:17.663295   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:17.663352   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:17.702987   78377 cri.go:89] found id: ""
	I0422 18:27:17.703014   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.703025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:17.703035   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:17.703049   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:17.758182   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:17.758222   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:17.775796   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:17.775828   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:17.866450   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:17.866493   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:17.866507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:17.947651   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:17.947685   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:16.204000   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:18.704622   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.864836   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:22.360984   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.883393   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:21.885743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.384476   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:20.489441   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:20.502920   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:20.502987   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:20.540533   78377 cri.go:89] found id: ""
	I0422 18:27:20.540557   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.540565   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:20.540569   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:20.540612   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:20.578789   78377 cri.go:89] found id: ""
	I0422 18:27:20.578815   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.578824   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:20.578832   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:20.578900   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:20.613481   78377 cri.go:89] found id: ""
	I0422 18:27:20.613515   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.613525   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:20.613533   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:20.613597   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:20.650289   78377 cri.go:89] found id: ""
	I0422 18:27:20.650320   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.650331   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:20.650339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:20.650400   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:20.686259   78377 cri.go:89] found id: ""
	I0422 18:27:20.686288   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.686300   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:20.686306   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:20.686367   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:20.725983   78377 cri.go:89] found id: ""
	I0422 18:27:20.726011   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.726018   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:20.726024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:20.726092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:20.762193   78377 cri.go:89] found id: ""
	I0422 18:27:20.762220   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.762229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:20.762237   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:20.762295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:20.800738   78377 cri.go:89] found id: ""
	I0422 18:27:20.800761   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.800769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:20.800776   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:20.800787   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.842744   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:20.842771   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:20.896307   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:20.896337   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:20.911457   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:20.911485   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:20.985249   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:20.985277   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:20.985293   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:23.560513   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:23.585134   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:23.585214   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:23.624947   78377 cri.go:89] found id: ""
	I0422 18:27:23.624972   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.624980   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:23.624986   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:23.625051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:23.661886   78377 cri.go:89] found id: ""
	I0422 18:27:23.661915   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.661924   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:23.661929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:23.661997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:23.701061   78377 cri.go:89] found id: ""
	I0422 18:27:23.701087   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.701097   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:23.701104   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:23.701163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:23.742728   78377 cri.go:89] found id: ""
	I0422 18:27:23.742753   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.742760   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:23.742765   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:23.742813   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:23.786970   78377 cri.go:89] found id: ""
	I0422 18:27:23.787002   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.787011   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:23.787017   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:23.787070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:23.825253   78377 cri.go:89] found id: ""
	I0422 18:27:23.825282   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.825292   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:23.825300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:23.825357   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:23.865774   78377 cri.go:89] found id: ""
	I0422 18:27:23.865799   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.865807   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:23.865812   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:23.865860   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:23.903212   78377 cri.go:89] found id: ""
	I0422 18:27:23.903239   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.903247   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:23.903254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:23.903267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:23.958931   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:23.958968   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:23.973352   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:23.973383   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:24.053335   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:24.053356   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:24.053367   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:24.136491   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:24.136528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.704821   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:23.203548   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:25.204601   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.361665   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.361708   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.388979   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.882505   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
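The pod_ready lines above poll the metrics-server pod's Ready condition until it flips to True or the wait times out. An equivalent manual check is sketched below; the k8s-app=metrics-server label selector is the usual label for the metrics-server addon and is an assumption, it does not appear in the log:
	  # print each metrics-server pod and its Ready condition
	  kubectl -n kube-system get pods -l k8s-app=metrics-server \
	    -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'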
	I0422 18:27:26.679983   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:26.694521   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:26.694583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:26.733114   78377 cri.go:89] found id: ""
	I0422 18:27:26.733146   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.733156   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:26.733163   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:26.733221   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:26.776882   78377 cri.go:89] found id: ""
	I0422 18:27:26.776906   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.776913   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:26.776918   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:26.776966   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:26.822830   78377 cri.go:89] found id: ""
	I0422 18:27:26.822863   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.822874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:26.822882   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:26.822945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:26.868600   78377 cri.go:89] found id: ""
	I0422 18:27:26.868633   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.868641   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:26.868655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:26.868712   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:26.907547   78377 cri.go:89] found id: ""
	I0422 18:27:26.907570   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.907578   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:26.907583   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:26.907640   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:26.947594   78377 cri.go:89] found id: ""
	I0422 18:27:26.947635   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.947647   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:26.947656   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:26.947715   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:26.986732   78377 cri.go:89] found id: ""
	I0422 18:27:26.986761   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.986772   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:26.986780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:26.986838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:27.024338   78377 cri.go:89] found id: ""
	I0422 18:27:27.024370   78377 logs.go:276] 0 containers: []
	W0422 18:27:27.024378   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:27.024385   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:27.024396   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:27.077071   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:27.077112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:27.092664   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:27.092694   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:27.173056   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:27.173081   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:27.173099   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:27.257836   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:27.257877   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:27.714190   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.204420   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.861728   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:31.360750   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.360969   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.883051   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.386563   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:29.800456   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:29.816085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:29.816150   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:29.858826   78377 cri.go:89] found id: ""
	I0422 18:27:29.858857   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.858878   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:29.858886   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:29.858956   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:29.900369   78377 cri.go:89] found id: ""
	I0422 18:27:29.900403   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.900417   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:29.900424   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:29.900490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:29.939766   78377 cri.go:89] found id: ""
	I0422 18:27:29.939801   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.939811   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:29.939818   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:29.939889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:29.986579   78377 cri.go:89] found id: ""
	I0422 18:27:29.986607   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.986617   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:29.986625   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:29.986685   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:30.030059   78377 cri.go:89] found id: ""
	I0422 18:27:30.030090   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.030102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:30.030110   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:30.030192   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:30.077543   78377 cri.go:89] found id: ""
	I0422 18:27:30.077573   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.077581   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:30.077586   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:30.077645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:30.123087   78377 cri.go:89] found id: ""
	I0422 18:27:30.123116   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.123137   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:30.123145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:30.123203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:30.160589   78377 cri.go:89] found id: ""
	I0422 18:27:30.160613   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.160621   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:30.160628   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:30.160639   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:30.213321   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:30.213352   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:30.228102   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:30.228129   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:30.303977   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:30.304013   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:30.304029   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:30.383817   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:30.383851   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:32.930619   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:32.943854   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:32.943914   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:32.984112   78377 cri.go:89] found id: ""
	I0422 18:27:32.984138   78377 logs.go:276] 0 containers: []
	W0422 18:27:32.984146   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:32.984151   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:32.984200   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:33.022243   78377 cri.go:89] found id: ""
	I0422 18:27:33.022283   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.022294   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:33.022301   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:33.022366   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:33.061177   78377 cri.go:89] found id: ""
	I0422 18:27:33.061205   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.061214   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:33.061222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:33.061281   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:33.104430   78377 cri.go:89] found id: ""
	I0422 18:27:33.104458   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.104466   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:33.104471   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:33.104528   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:33.140255   78377 cri.go:89] found id: ""
	I0422 18:27:33.140284   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.140295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:33.140302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:33.140362   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:33.179487   78377 cri.go:89] found id: ""
	I0422 18:27:33.179512   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.179519   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:33.179524   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:33.179576   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:33.217226   78377 cri.go:89] found id: ""
	I0422 18:27:33.217258   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.217265   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:33.217271   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:33.217319   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:33.257076   78377 cri.go:89] found id: ""
	I0422 18:27:33.257104   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.257114   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:33.257123   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:33.257137   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:33.271183   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:33.271211   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:33.344812   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:33.344843   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:33.344859   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:33.420605   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:33.420640   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:33.465779   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:33.465807   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:32.704424   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:34.705215   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.861184   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.361048   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.883602   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.383601   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:36.019062   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:36.039226   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:36.039305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:36.082940   78377 cri.go:89] found id: ""
	I0422 18:27:36.082978   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.082991   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:36.083000   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:36.083063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:36.120371   78377 cri.go:89] found id: ""
	I0422 18:27:36.120416   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.120428   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:36.120436   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:36.120496   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:36.158018   78377 cri.go:89] found id: ""
	I0422 18:27:36.158051   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.158063   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:36.158070   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:36.158131   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:36.196192   78377 cri.go:89] found id: ""
	I0422 18:27:36.196221   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.196231   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:36.196238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:36.196305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:36.237742   78377 cri.go:89] found id: ""
	I0422 18:27:36.237773   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.237784   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:36.237791   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:36.237852   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:36.277884   78377 cri.go:89] found id: ""
	I0422 18:27:36.277911   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.277918   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:36.277923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:36.277993   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:36.314897   78377 cri.go:89] found id: ""
	I0422 18:27:36.314929   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.314939   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:36.314947   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:36.315009   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:36.354806   78377 cri.go:89] found id: ""
	I0422 18:27:36.354833   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.354843   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:36.354851   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:36.354863   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.406941   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:36.406981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:36.423308   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:36.423344   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:36.507202   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:36.507223   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:36.507238   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:36.582489   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:36.582525   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:39.127409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:39.140820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:39.140895   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:39.182068   78377 cri.go:89] found id: ""
	I0422 18:27:39.182094   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.182105   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:39.182112   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:39.182169   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:39.222711   78377 cri.go:89] found id: ""
	I0422 18:27:39.222735   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.222751   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:39.222756   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:39.222827   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:39.263396   78377 cri.go:89] found id: ""
	I0422 18:27:39.263423   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.263432   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:39.263437   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:39.263490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:39.300559   78377 cri.go:89] found id: ""
	I0422 18:27:39.300589   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.300603   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:39.300610   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:39.300672   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:39.336486   78377 cri.go:89] found id: ""
	I0422 18:27:39.336521   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.336530   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:39.336536   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:39.336584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:39.373985   78377 cri.go:89] found id: ""
	I0422 18:27:39.374020   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.374030   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:39.374038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:39.374097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:39.412511   78377 cri.go:89] found id: ""
	I0422 18:27:39.412540   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.412547   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:39.412553   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:39.412616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:39.459197   78377 cri.go:89] found id: ""
	I0422 18:27:39.459233   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.459243   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:39.459254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:39.459269   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:39.514579   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:39.514623   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:39.530082   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:39.530107   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:39.603797   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:39.603830   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:39.603854   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:37.203082   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.204563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.860739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.861544   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.385271   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.389273   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.684853   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:39.684890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:42.227702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:42.243438   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:42.243499   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:42.290374   78377 cri.go:89] found id: ""
	I0422 18:27:42.290402   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.290413   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:42.290420   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:42.290481   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:42.332793   78377 cri.go:89] found id: ""
	I0422 18:27:42.332828   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.332840   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:42.332875   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:42.332937   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:42.375844   78377 cri.go:89] found id: ""
	I0422 18:27:42.375876   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.375884   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:42.375889   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:42.375945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:42.419725   78377 cri.go:89] found id: ""
	I0422 18:27:42.419758   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.419769   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:42.419777   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:42.419878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:42.453969   78377 cri.go:89] found id: ""
	I0422 18:27:42.454004   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.454014   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:42.454022   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:42.454080   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:42.489045   78377 cri.go:89] found id: ""
	I0422 18:27:42.489077   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.489087   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:42.489095   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:42.489157   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:42.529127   78377 cri.go:89] found id: ""
	I0422 18:27:42.529155   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.529166   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:42.529174   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:42.529229   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:42.566253   78377 cri.go:89] found id: ""
	I0422 18:27:42.566278   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.566286   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:42.566293   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:42.566307   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:42.622054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:42.622101   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:42.636278   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:42.636304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:42.712179   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:42.712203   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:42.712215   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:42.791885   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:42.791928   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:41.705615   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.203947   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.361656   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:47.860929   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.882684   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:46.886119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:49.382017   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.337091   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:45.353053   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:45.353133   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:45.393230   78377 cri.go:89] found id: ""
	I0422 18:27:45.393257   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.393267   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:45.393274   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:45.393330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:45.432183   78377 cri.go:89] found id: ""
	I0422 18:27:45.432210   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.432220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:45.432228   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:45.432285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:45.468114   78377 cri.go:89] found id: ""
	I0422 18:27:45.468147   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.468157   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:45.468169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:45.468233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:45.504793   78377 cri.go:89] found id: ""
	I0422 18:27:45.504817   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.504836   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:45.504841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:45.504889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:45.544822   78377 cri.go:89] found id: ""
	I0422 18:27:45.544851   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.544862   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:45.544868   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:45.544934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:45.588266   78377 cri.go:89] found id: ""
	I0422 18:27:45.588289   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.588322   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:45.588330   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:45.588391   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:45.625549   78377 cri.go:89] found id: ""
	I0422 18:27:45.625576   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.625583   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:45.625589   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:45.625639   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:45.663066   78377 cri.go:89] found id: ""
	I0422 18:27:45.663096   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.663104   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:45.663114   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:45.663143   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:45.715051   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:45.715082   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:45.729496   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:45.729523   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:45.801270   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:45.801296   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:45.801312   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:45.886530   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:45.886561   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:48.429822   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:48.444528   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:48.444610   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:48.483164   78377 cri.go:89] found id: ""
	I0422 18:27:48.483194   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.483204   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:48.483210   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:48.483257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:48.520295   78377 cri.go:89] found id: ""
	I0422 18:27:48.520321   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.520328   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:48.520333   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:48.520378   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:48.558839   78377 cri.go:89] found id: ""
	I0422 18:27:48.558866   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.558875   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:48.558881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:48.558939   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:48.599692   78377 cri.go:89] found id: ""
	I0422 18:27:48.599715   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.599722   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:48.599728   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:48.599773   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:48.638457   78377 cri.go:89] found id: ""
	I0422 18:27:48.638486   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.638494   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:48.638500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:48.638561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:48.677344   78377 cri.go:89] found id: ""
	I0422 18:27:48.677383   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.677395   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:48.677402   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:48.677466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:48.717129   78377 cri.go:89] found id: ""
	I0422 18:27:48.717155   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.717163   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:48.717169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:48.717219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:48.758256   78377 cri.go:89] found id: ""
	I0422 18:27:48.758281   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.758289   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:48.758297   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:48.758311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:48.810377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:48.810415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:48.824919   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:48.824949   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:48.908446   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:48.908473   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:48.908569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:48.984952   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:48.984991   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
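The cycle that process 78377 keeps repeating above (pgrep for the apiserver, crictl listings per component, then kubelet/dmesg/describe-nodes/CRI-O/container-status log gathering) can be replayed by hand when triaging this failure. The following is a minimal sketch, not harness output, built from the same commands quoted verbatim in the log; it assumes a shell inside the minikube guest (for example via: minikube ssh -p <profile>) and the kubectl/kubeconfig paths shown above.

    # Sketch: replay the diagnostic cycle the log repeats above (run inside the guest).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"

    # List CRI containers (all states) for each control-plane component.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done

    # Gather the same logs the tooling collects when nothing is found.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig   # fails while :8443 refuses connections
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a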
	I0422 18:27:46.703083   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:48.705413   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:50.361465   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:52.364509   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.384561   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.882657   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.527387   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:51.541482   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:51.541560   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.579020   78377 cri.go:89] found id: ""
	I0422 18:27:51.579098   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.579114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:51.579134   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:51.579204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:51.616430   78377 cri.go:89] found id: ""
	I0422 18:27:51.616456   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.616465   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:51.616470   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:51.616516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:51.654089   78377 cri.go:89] found id: ""
	I0422 18:27:51.654120   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.654131   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:51.654138   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:51.654201   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:51.693945   78377 cri.go:89] found id: ""
	I0422 18:27:51.693979   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.693993   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:51.694000   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:51.694068   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:51.732873   78377 cri.go:89] found id: ""
	I0422 18:27:51.732906   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.732917   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:51.732923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:51.732990   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:51.770772   78377 cri.go:89] found id: ""
	I0422 18:27:51.770794   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.770801   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:51.770807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:51.770862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:51.819370   78377 cri.go:89] found id: ""
	I0422 18:27:51.819397   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.819405   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:51.819411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:51.819459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:51.858001   78377 cri.go:89] found id: ""
	I0422 18:27:51.858033   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.858044   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:51.858055   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:51.858069   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:51.938531   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:51.938557   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:51.938571   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:52.014397   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:52.014435   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:52.059420   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:52.059458   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:52.119498   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:52.119534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:54.634238   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:54.649044   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:54.649119   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.203623   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.205834   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.863919   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.360796   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:56.383743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:58.383783   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.691846   78377 cri.go:89] found id: ""
	I0422 18:27:54.691879   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.691890   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:54.691907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:54.691970   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:54.731466   78377 cri.go:89] found id: ""
	I0422 18:27:54.731496   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.731507   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:54.731515   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:54.731588   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:54.776948   78377 cri.go:89] found id: ""
	I0422 18:27:54.776972   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.776979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:54.776984   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:54.777031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:54.815908   78377 cri.go:89] found id: ""
	I0422 18:27:54.815939   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.815946   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:54.815952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:54.815997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:54.856641   78377 cri.go:89] found id: ""
	I0422 18:27:54.856673   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.856684   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:54.856690   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:54.856757   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:54.896968   78377 cri.go:89] found id: ""
	I0422 18:27:54.896996   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.897004   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:54.897009   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:54.897073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:54.936353   78377 cri.go:89] found id: ""
	I0422 18:27:54.936388   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.936400   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:54.936407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:54.936468   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:54.976009   78377 cri.go:89] found id: ""
	I0422 18:27:54.976038   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.976048   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:54.976058   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:54.976071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:55.027890   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:55.027924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:55.041914   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:55.041939   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:55.112556   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.112583   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:55.112597   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:55.187688   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:55.187723   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:57.730259   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:57.745006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:57.745073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:57.786906   78377 cri.go:89] found id: ""
	I0422 18:27:57.786942   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.786952   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:57.786959   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:57.787019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:57.827158   78377 cri.go:89] found id: ""
	I0422 18:27:57.827188   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.827199   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:57.827206   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:57.827254   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:57.864370   78377 cri.go:89] found id: ""
	I0422 18:27:57.864405   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.864413   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:57.864419   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:57.864475   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:57.903747   78377 cri.go:89] found id: ""
	I0422 18:27:57.903773   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.903781   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:57.903786   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:57.903846   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:57.941674   78377 cri.go:89] found id: ""
	I0422 18:27:57.941705   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.941713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:57.941718   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:57.941767   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:57.984888   78377 cri.go:89] found id: ""
	I0422 18:27:57.984918   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.984929   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:57.984935   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:57.984980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:58.026964   78377 cri.go:89] found id: ""
	I0422 18:27:58.026993   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.027006   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:58.027012   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:58.027059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:58.065403   78377 cri.go:89] found id: ""
	I0422 18:27:58.065430   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.065440   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:58.065450   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:58.065464   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:58.152471   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:58.152518   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:58.198766   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:58.198803   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:58.257760   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:58.257798   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:58.272656   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:58.272693   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:58.385784   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.703110   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.704061   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.704421   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.361229   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:01.362273   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.385750   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:02.886349   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
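The interleaved pod_ready lines from processes 77400, 77634 and 77929 are the harness polling the Ready condition of each cluster's metrics-server pod. A rough hand-check from a workstation, offered only as a sketch and assuming the addon keeps its usual k8s-app=metrics-server label, would be:

    # Show each metrics-server pod and its Ready condition status.
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

    # Or block until it reports Ready, mirroring what the poll loop waits for.
    kubectl --context <profile> -n kube-system wait --for=condition=ready \
      pod -l k8s-app=metrics-server --timeout=60s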
	I0422 18:28:00.886736   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:00.902607   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:00.902684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:00.941476   78377 cri.go:89] found id: ""
	I0422 18:28:00.941506   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.941515   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:00.941521   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:00.941571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:00.983107   78377 cri.go:89] found id: ""
	I0422 18:28:00.983142   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.983152   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:00.983159   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:00.983216   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:01.024419   78377 cri.go:89] found id: ""
	I0422 18:28:01.024448   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.024455   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:01.024461   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:01.024517   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:01.065941   78377 cri.go:89] found id: ""
	I0422 18:28:01.065973   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.065984   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:01.065992   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:01.066041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:01.107857   78377 cri.go:89] found id: ""
	I0422 18:28:01.107898   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.107908   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:01.107916   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:01.107980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:01.149626   78377 cri.go:89] found id: ""
	I0422 18:28:01.149657   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.149667   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:01.149676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:01.149740   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:01.190491   78377 cri.go:89] found id: ""
	I0422 18:28:01.190520   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.190529   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:01.190535   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:01.190590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:01.231145   78377 cri.go:89] found id: ""
	I0422 18:28:01.231176   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.231187   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:01.231197   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:01.231208   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:01.317826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:01.317874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:01.369441   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:01.369478   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:01.432210   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:01.432251   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:01.446720   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:01.446749   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:01.528643   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.029816   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:04.044751   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:04.044836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:04.085044   78377 cri.go:89] found id: ""
	I0422 18:28:04.085077   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.085089   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:04.085097   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:04.085148   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:04.129071   78377 cri.go:89] found id: ""
	I0422 18:28:04.129100   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.129111   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:04.129118   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:04.129181   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:04.167838   78377 cri.go:89] found id: ""
	I0422 18:28:04.167864   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.167874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:04.167881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:04.167943   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:04.216283   78377 cri.go:89] found id: ""
	I0422 18:28:04.216313   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.216321   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:04.216327   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:04.216376   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:04.255693   78377 cri.go:89] found id: ""
	I0422 18:28:04.255724   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.255731   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:04.255737   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:04.255786   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:04.293601   78377 cri.go:89] found id: ""
	I0422 18:28:04.293639   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.293651   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:04.293659   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:04.293709   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:04.358730   78377 cri.go:89] found id: ""
	I0422 18:28:04.358755   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.358767   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:04.358774   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:04.358837   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:04.399231   78377 cri.go:89] found id: ""
	I0422 18:28:04.399261   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.399271   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:04.399280   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:04.399291   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:04.415526   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:04.415558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:04.491845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.491871   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:04.491885   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:04.575076   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:04.575148   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:04.621931   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:04.621956   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:02.203877   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:04.204896   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:03.860506   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.860713   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.384180   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.884714   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.173117   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:07.188914   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:07.188973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:07.233867   78377 cri.go:89] found id: ""
	I0422 18:28:07.233894   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.233902   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:07.233907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:07.233968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:07.274777   78377 cri.go:89] found id: ""
	I0422 18:28:07.274818   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.274828   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:07.274835   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:07.274897   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:07.310813   78377 cri.go:89] found id: ""
	I0422 18:28:07.310864   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.310874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:07.310881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:07.310951   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:07.348397   78377 cri.go:89] found id: ""
	I0422 18:28:07.348423   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.348431   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:07.348436   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:07.348489   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:07.387344   78377 cri.go:89] found id: ""
	I0422 18:28:07.387371   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.387381   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:07.387388   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:07.387443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:07.426117   78377 cri.go:89] found id: ""
	I0422 18:28:07.426147   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.426158   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:07.426166   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:07.426233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:07.466624   78377 cri.go:89] found id: ""
	I0422 18:28:07.466653   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.466664   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:07.466671   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:07.466729   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:07.504282   78377 cri.go:89] found id: ""
	I0422 18:28:07.504306   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.504342   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:07.504353   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:07.504369   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:07.584111   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:07.584146   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:07.627212   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:07.627240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.676814   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:07.676849   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:07.691117   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:07.691156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:07.764300   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
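Every "describe nodes" attempt above fails the same way: the connection to localhost:8443 is refused, i.e. nothing is serving the apiserver port yet, which is also why every crictl listing comes back empty. A quick hand-check from inside the guest, assuming curl and ss are available in the image, is sketched below:

    # Probe the apiserver port the failing kubectl calls are pointed at.
    curl -ksf https://localhost:8443/healthz && echo ok || echo "apiserver not answering on 8443"
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on :8443"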
	I0422 18:28:06.206560   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.703406   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.364348   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.861760   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.361127   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.392330   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:12.883081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.265313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:10.280094   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:10.280170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:10.318208   78377 cri.go:89] found id: ""
	I0422 18:28:10.318236   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.318245   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:10.318251   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:10.318305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:10.353450   78377 cri.go:89] found id: ""
	I0422 18:28:10.353477   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.353484   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:10.353490   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:10.353547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:10.398359   78377 cri.go:89] found id: ""
	I0422 18:28:10.398389   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.398400   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:10.398411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:10.398474   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:10.435896   78377 cri.go:89] found id: ""
	I0422 18:28:10.435928   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.435939   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:10.435946   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:10.436025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:10.479313   78377 cri.go:89] found id: ""
	I0422 18:28:10.479342   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.479353   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:10.479360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:10.479433   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:10.521949   78377 cri.go:89] found id: ""
	I0422 18:28:10.521978   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.521990   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:10.521997   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:10.522054   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:10.557697   78377 cri.go:89] found id: ""
	I0422 18:28:10.557722   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.557732   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:10.557739   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:10.557804   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:10.595060   78377 cri.go:89] found id: ""
	I0422 18:28:10.595090   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.595102   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:10.595112   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:10.595142   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:10.649535   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:10.649570   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:10.664176   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:10.664210   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:10.748778   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.748818   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:10.748839   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:10.858019   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:10.858062   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:13.405737   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:13.420265   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:13.420342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:13.456505   78377 cri.go:89] found id: ""
	I0422 18:28:13.456534   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.456545   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:13.456551   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:13.456611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:13.493435   78377 cri.go:89] found id: ""
	I0422 18:28:13.493464   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.493477   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:13.493485   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:13.493541   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:13.530572   78377 cri.go:89] found id: ""
	I0422 18:28:13.530602   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.530614   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:13.530620   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:13.530682   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:13.565448   78377 cri.go:89] found id: ""
	I0422 18:28:13.565472   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.565480   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:13.565485   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:13.565574   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:13.613806   78377 cri.go:89] found id: ""
	I0422 18:28:13.613840   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.613851   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:13.613860   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:13.613924   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:13.649483   78377 cri.go:89] found id: ""
	I0422 18:28:13.649511   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.649522   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:13.649529   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:13.649589   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:13.689149   78377 cri.go:89] found id: ""
	I0422 18:28:13.689182   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.689193   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:13.689200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:13.689257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:13.726431   78377 cri.go:89] found id: ""
	I0422 18:28:13.726454   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.726461   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:13.726468   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:13.726480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:13.782843   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:13.782882   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:13.797390   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:13.797415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:13.877880   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:13.877905   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:13.877923   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:13.959103   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:13.959154   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:10.705202   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.203760   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.205898   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.361423   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:17.363341   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:14.883352   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.886433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.382478   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.502589   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:16.519996   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:16.520070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:16.559001   78377 cri.go:89] found id: ""
	I0422 18:28:16.559029   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.559037   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:16.559043   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:16.559095   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:16.620188   78377 cri.go:89] found id: ""
	I0422 18:28:16.620211   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.620219   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:16.620224   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:16.620283   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:16.670220   78377 cri.go:89] found id: ""
	I0422 18:28:16.670253   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.670264   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:16.670279   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:16.670345   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:16.710931   78377 cri.go:89] found id: ""
	I0422 18:28:16.710962   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.710973   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:16.710980   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:16.711043   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:16.748793   78377 cri.go:89] found id: ""
	I0422 18:28:16.748838   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.748845   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:16.748851   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:16.748904   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:16.785518   78377 cri.go:89] found id: ""
	I0422 18:28:16.785547   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.785554   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:16.785564   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:16.785616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:16.825141   78377 cri.go:89] found id: ""
	I0422 18:28:16.825174   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.825192   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:16.825200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:16.825265   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:16.866918   78377 cri.go:89] found id: ""
	I0422 18:28:16.866947   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.866958   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:16.866972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:16.866987   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:16.912589   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:16.912633   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:16.968407   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:16.968446   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:16.983202   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:16.983241   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:17.063852   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:17.063875   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:17.063889   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:19.645012   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:17.703917   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.704958   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.861537   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.862949   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.882158   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:23.885280   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.659676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:19.659750   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:19.697348   78377 cri.go:89] found id: ""
	I0422 18:28:19.697382   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.697393   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:19.697401   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:19.697461   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:19.738830   78377 cri.go:89] found id: ""
	I0422 18:28:19.738864   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.738876   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:19.738883   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:19.738945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:19.783452   78377 cri.go:89] found id: ""
	I0422 18:28:19.783476   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.783483   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:19.783491   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:19.783554   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:19.826848   78377 cri.go:89] found id: ""
	I0422 18:28:19.826875   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.826886   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:19.826893   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:19.826945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:19.867207   78377 cri.go:89] found id: ""
	I0422 18:28:19.867229   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.867236   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:19.867242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:19.867298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:19.903752   78377 cri.go:89] found id: ""
	I0422 18:28:19.903783   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.903799   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:19.903806   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:19.903870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:19.946891   78377 cri.go:89] found id: ""
	I0422 18:28:19.946914   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.946921   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:19.946927   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:19.946997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:19.989272   78377 cri.go:89] found id: ""
	I0422 18:28:19.989297   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.989304   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:19.989312   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:19.989323   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:20.038854   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:20.038887   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:20.053553   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:20.053584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:20.132687   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:20.132712   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:20.132727   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:20.209600   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:20.209634   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
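
Each `found id: ""` / `0 containers` pair above comes from running crictl with --quiet, which prints only container IDs, and treating empty output as "no container matching that name". A minimal sketch of that lookup, assuming crictl is installed on the node and using the same flags as in the log:

// listContainerIDs returns the IDs of all containers (any state) whose name
// matches the filter; empty output from crictl means no matching containers.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line; empty output -> nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(name)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}
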
	I0422 18:28:22.752356   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:22.765506   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:22.765567   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:22.804991   78377 cri.go:89] found id: ""
	I0422 18:28:22.805022   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.805029   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:22.805035   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:22.805082   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:22.843726   78377 cri.go:89] found id: ""
	I0422 18:28:22.843757   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.843768   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:22.843775   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:22.843838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:22.884584   78377 cri.go:89] found id: ""
	I0422 18:28:22.884610   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.884620   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:22.884627   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:22.884701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:22.920974   78377 cri.go:89] found id: ""
	I0422 18:28:22.921004   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.921020   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:22.921028   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:22.921092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:22.956676   78377 cri.go:89] found id: ""
	I0422 18:28:22.956702   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.956713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:22.956720   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:22.956784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:22.997517   78377 cri.go:89] found id: ""
	I0422 18:28:22.997545   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.997553   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:22.997559   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:22.997623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:23.036448   78377 cri.go:89] found id: ""
	I0422 18:28:23.036478   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.036489   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:23.036497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:23.036561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:23.075567   78377 cri.go:89] found id: ""
	I0422 18:28:23.075592   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.075600   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:23.075611   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:23.075625   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:23.130372   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:23.130408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:23.147534   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:23.147567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:23.222730   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:23.222753   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:23.222765   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:23.301972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:23.302006   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.204356   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.703765   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.361251   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:26.862825   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.886291   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:28.382905   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.847521   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:25.861780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:25.861867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:25.899314   78377 cri.go:89] found id: ""
	I0422 18:28:25.899341   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.899349   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:25.899355   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:25.899412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:25.940057   78377 cri.go:89] found id: ""
	I0422 18:28:25.940088   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.940099   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:25.940106   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:25.940163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:25.974923   78377 cri.go:89] found id: ""
	I0422 18:28:25.974951   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.974959   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:25.974968   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:25.975041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:26.012533   78377 cri.go:89] found id: ""
	I0422 18:28:26.012559   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.012566   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:26.012572   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:26.012620   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:26.049804   78377 cri.go:89] found id: ""
	I0422 18:28:26.049828   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.049835   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:26.049841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:26.049888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:26.092803   78377 cri.go:89] found id: ""
	I0422 18:28:26.092830   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.092842   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:26.092850   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:26.092919   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:26.130442   78377 cri.go:89] found id: ""
	I0422 18:28:26.130471   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.130480   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:26.130487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:26.130544   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:26.165933   78377 cri.go:89] found id: ""
	I0422 18:28:26.165957   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.165966   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:26.165974   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:26.165986   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:26.245237   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:26.245259   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:26.245278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:26.330143   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:26.330181   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.372178   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:26.372204   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:26.429779   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:26.429817   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
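
The "Gathering logs for ..." steps each map a log name to a shell command run over SSH on the node. The table below paraphrases the commands visible in this log (it is a sketch, not copied from minikube's logs package); the -n 400 and tail -n 400 limits match what the runner executes above.

// A small lookup table mirroring the gather commands seen in the log, keyed by
// the same names the "Gathering logs for ..." lines use.
package main

import "fmt"

var gatherers = map[string]string{
	"kubelet":          `sudo journalctl -u kubelet -n 400`,
	"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"describe nodes":   `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
	"CRI-O":            `sudo journalctl -u crio -n 400`,
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func main() {
	for name, cmd := range gatherers {
		fmt.Printf("Gathering logs for %s ...\n  /bin/bash -c %q\n", name, cmd)
	}
}
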
	I0422 18:28:28.945985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:28.960470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:28.960546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:28.999618   78377 cri.go:89] found id: ""
	I0422 18:28:28.999639   78377 logs.go:276] 0 containers: []
	W0422 18:28:28.999648   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:28.999653   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:28.999711   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:29.034177   78377 cri.go:89] found id: ""
	I0422 18:28:29.034211   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.034220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:29.034225   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:29.034286   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:29.073759   78377 cri.go:89] found id: ""
	I0422 18:28:29.073782   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.073790   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:29.073796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:29.073857   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:29.111898   78377 cri.go:89] found id: ""
	I0422 18:28:29.111929   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.111941   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:29.111948   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:29.112005   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:29.148486   78377 cri.go:89] found id: ""
	I0422 18:28:29.148520   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.148531   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:29.148539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:29.148602   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:29.186715   78377 cri.go:89] found id: ""
	I0422 18:28:29.186743   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.186753   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:29.186759   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:29.186805   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:29.226387   78377 cri.go:89] found id: ""
	I0422 18:28:29.226422   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.226433   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:29.226440   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:29.226508   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:29.274102   78377 cri.go:89] found id: ""
	I0422 18:28:29.274131   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.274142   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:29.274152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:29.274165   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:29.333066   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:29.333104   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:29.348376   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:29.348411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:29.422976   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:29.423009   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:29.423022   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:29.501211   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:29.501253   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.705590   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.205641   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.361439   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:31.361534   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:30.383502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.887006   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.048316   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:32.063859   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:32.063934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:32.104527   78377 cri.go:89] found id: ""
	I0422 18:28:32.104560   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.104571   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:32.104580   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:32.104645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:32.142945   78377 cri.go:89] found id: ""
	I0422 18:28:32.142976   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.142984   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:32.142990   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:32.143036   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:32.182359   78377 cri.go:89] found id: ""
	I0422 18:28:32.182385   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.182393   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:32.182399   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:32.182446   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:32.223041   78377 cri.go:89] found id: ""
	I0422 18:28:32.223069   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.223077   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:32.223083   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:32.223161   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:32.261892   78377 cri.go:89] found id: ""
	I0422 18:28:32.261924   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.261936   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:32.261943   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:32.262008   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:32.307497   78377 cri.go:89] found id: ""
	I0422 18:28:32.307527   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.307537   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:32.307546   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:32.307617   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:32.345180   78377 cri.go:89] found id: ""
	I0422 18:28:32.345214   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.345227   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:32.345235   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:32.345299   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:32.385999   78377 cri.go:89] found id: ""
	I0422 18:28:32.386025   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.386033   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:32.386041   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:32.386053   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:32.444377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:32.444436   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:32.460566   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:32.460594   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:32.535839   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:32.535860   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:32.535872   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:32.621998   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:32.622039   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:31.704145   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.704841   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.860769   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.860833   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.861583   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.382871   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.383164   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.165079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:35.178804   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:35.178877   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:35.221032   78377 cri.go:89] found id: ""
	I0422 18:28:35.221065   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.221076   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:35.221083   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:35.221170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:35.262550   78377 cri.go:89] found id: ""
	I0422 18:28:35.262573   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.262583   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:35.262589   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:35.262651   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:35.301799   78377 cri.go:89] found id: ""
	I0422 18:28:35.301826   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.301834   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:35.301840   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:35.301901   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:35.340606   78377 cri.go:89] found id: ""
	I0422 18:28:35.340635   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.340642   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:35.340647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:35.340695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:35.386226   78377 cri.go:89] found id: ""
	I0422 18:28:35.386251   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.386261   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:35.386268   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:35.386330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:35.424555   78377 cri.go:89] found id: ""
	I0422 18:28:35.424584   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.424594   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:35.424601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:35.424662   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:35.465856   78377 cri.go:89] found id: ""
	I0422 18:28:35.465886   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.465895   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:35.465901   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:35.465963   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:35.504849   78377 cri.go:89] found id: ""
	I0422 18:28:35.504877   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.504887   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:35.504898   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:35.504931   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:35.579177   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:35.579202   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:35.579217   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:35.656322   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:35.656359   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.700376   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:35.700411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:35.753742   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:35.753776   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.269536   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:38.285945   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:38.286019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:38.324408   78377 cri.go:89] found id: ""
	I0422 18:28:38.324441   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.324461   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:38.324468   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:38.324539   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:38.362320   78377 cri.go:89] found id: ""
	I0422 18:28:38.362343   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.362350   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:38.362363   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:38.362411   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:38.404208   78377 cri.go:89] found id: ""
	I0422 18:28:38.404234   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.404243   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:38.404248   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:38.404309   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:38.448250   78377 cri.go:89] found id: ""
	I0422 18:28:38.448314   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.448325   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:38.448332   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:38.448397   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:38.485803   78377 cri.go:89] found id: ""
	I0422 18:28:38.485836   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.485848   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:38.485856   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:38.485915   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:38.525903   78377 cri.go:89] found id: ""
	I0422 18:28:38.525933   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.525943   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:38.525952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:38.526031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:38.562638   78377 cri.go:89] found id: ""
	I0422 18:28:38.562664   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.562672   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:38.562677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:38.562726   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:38.603614   78377 cri.go:89] found id: ""
	I0422 18:28:38.603642   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.603653   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:38.603662   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:38.603673   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:38.658054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:38.658086   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.674884   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:38.674908   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:38.748462   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:38.748502   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:38.748528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:38.826701   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:38.826741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:36.204210   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:38.205076   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:40.360574   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.862692   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:39.882407   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.882939   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:43.883102   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.374075   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:41.389161   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:41.389235   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:41.427033   78377 cri.go:89] found id: ""
	I0422 18:28:41.427064   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.427075   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:41.427096   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:41.427178   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:41.465376   78377 cri.go:89] found id: ""
	I0422 18:28:41.465408   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.465419   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:41.465427   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:41.465512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:41.502451   78377 cri.go:89] found id: ""
	I0422 18:28:41.502482   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.502490   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:41.502501   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:41.502563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:41.538748   78377 cri.go:89] found id: ""
	I0422 18:28:41.538784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.538796   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:41.538803   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:41.538862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:41.576877   78377 cri.go:89] found id: ""
	I0422 18:28:41.576928   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.576941   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:41.576949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:41.577010   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:41.615062   78377 cri.go:89] found id: ""
	I0422 18:28:41.615094   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.615105   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:41.615113   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:41.615190   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:41.656757   78377 cri.go:89] found id: ""
	I0422 18:28:41.656784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.656792   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:41.656796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:41.656861   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:41.694351   78377 cri.go:89] found id: ""
	I0422 18:28:41.694374   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.694382   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:41.694390   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:41.694402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:41.775490   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:41.775528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.820152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:41.820182   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:41.874035   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:41.874071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:41.889510   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:41.889534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:41.967706   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
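
Taken together, the cycle in this stretch repeats roughly every three seconds: pgrep for a kube-apiserver process, enumerate CRI containers when none is found, gather logs, and retry. A generic, hedged sketch of such a wait loop follows; the pgrep pattern is the one from the log, while the five-minute deadline is an assumption for illustration, not a value taken from the log.

// Poll for a running kube-apiserver process and retry every three seconds
// until a deadline, mirroring the pgrep cadence visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when no process matches the pattern.
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver not running yet; gathering logs and retrying")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
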
	I0422 18:28:44.468471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:44.483108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:44.483202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:44.522503   78377 cri.go:89] found id: ""
	I0422 18:28:44.522528   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.522536   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:44.522542   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:44.522590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:44.562004   78377 cri.go:89] found id: ""
	I0422 18:28:44.562028   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.562036   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:44.562042   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:44.562098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:44.608907   78377 cri.go:89] found id: ""
	I0422 18:28:44.608944   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.608955   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:44.608964   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:44.609027   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:44.651192   78377 cri.go:89] found id: ""
	I0422 18:28:44.651225   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.651235   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:44.651242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:44.651304   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:40.703806   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.704426   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.707600   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.361890   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.860686   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.883300   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.884863   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.693057   78377 cri.go:89] found id: ""
	I0422 18:28:44.693095   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.693102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:44.693108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:44.693152   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:44.731029   78377 cri.go:89] found id: ""
	I0422 18:28:44.731070   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.731079   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:44.731092   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:44.731165   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:44.768935   78377 cri.go:89] found id: ""
	I0422 18:28:44.768964   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.768985   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:44.768993   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:44.769044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:44.814942   78377 cri.go:89] found id: ""
	I0422 18:28:44.814966   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.814984   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:44.814992   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:44.815012   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:44.872586   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:44.872612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:44.929068   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:44.929125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:44.945931   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:44.945960   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:45.019871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:45.019907   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:45.019922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:47.601880   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:47.616133   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:47.616219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:47.656526   78377 cri.go:89] found id: ""
	I0422 18:28:47.656547   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.656554   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:47.656560   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:47.656618   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:47.696580   78377 cri.go:89] found id: ""
	I0422 18:28:47.696609   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.696619   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:47.696626   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:47.696684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:47.737309   78377 cri.go:89] found id: ""
	I0422 18:28:47.737340   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.737351   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:47.737359   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:47.737413   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:47.774541   78377 cri.go:89] found id: ""
	I0422 18:28:47.774572   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.774583   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:47.774591   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:47.774652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:47.810397   78377 cri.go:89] found id: ""
	I0422 18:28:47.810429   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.810437   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:47.810444   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:47.810506   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:47.847293   78377 cri.go:89] found id: ""
	I0422 18:28:47.847327   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.847337   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:47.847345   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:47.847403   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:47.887454   78377 cri.go:89] found id: ""
	I0422 18:28:47.887476   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.887486   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:47.887493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:47.887553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:47.926706   78377 cri.go:89] found id: ""
	I0422 18:28:47.926731   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.926740   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:47.926750   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:47.926769   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:48.007354   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:48.007382   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:48.007398   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:48.094355   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:48.094394   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:48.137163   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:48.137194   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:48.187732   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:48.187767   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:47.207153   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.704440   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.863696   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.360739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.384172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.386468   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.703686   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:50.717040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:50.717113   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:50.751573   78377 cri.go:89] found id: ""
	I0422 18:28:50.751598   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.751610   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:50.751617   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:50.751674   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:50.790434   78377 cri.go:89] found id: ""
	I0422 18:28:50.790465   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.790476   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:50.790483   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:50.790537   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:50.852414   78377 cri.go:89] found id: ""
	I0422 18:28:50.852442   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.852451   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:50.852457   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:50.852512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:50.891439   78377 cri.go:89] found id: ""
	I0422 18:28:50.891470   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.891481   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:50.891488   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:50.891553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:50.929376   78377 cri.go:89] found id: ""
	I0422 18:28:50.929409   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.929420   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:50.929428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:50.929493   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:50.963919   78377 cri.go:89] found id: ""
	I0422 18:28:50.963949   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.963957   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:50.963963   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:50.964022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:50.998583   78377 cri.go:89] found id: ""
	I0422 18:28:50.998621   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.998632   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:50.998640   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:50.998702   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:51.036477   78377 cri.go:89] found id: ""
	I0422 18:28:51.036504   78377 logs.go:276] 0 containers: []
	W0422 18:28:51.036511   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:51.036519   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:51.036531   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:51.092688   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:51.092735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.107749   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:51.107778   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:51.185620   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:51.185643   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:51.185665   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:51.268824   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:51.268856   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:53.814341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:53.829048   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:53.829123   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:53.873451   78377 cri.go:89] found id: ""
	I0422 18:28:53.873483   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.873493   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:53.873500   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:53.873564   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:53.915262   78377 cri.go:89] found id: ""
	I0422 18:28:53.915295   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.915306   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:53.915315   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:53.915404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:53.958526   78377 cri.go:89] found id: ""
	I0422 18:28:53.958556   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.958567   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:53.958575   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:53.958645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:53.997452   78377 cri.go:89] found id: ""
	I0422 18:28:53.997484   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.997496   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:53.997503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:53.997563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:54.035937   78377 cri.go:89] found id: ""
	I0422 18:28:54.035961   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.035970   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:54.035975   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:54.036022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:54.078858   78377 cri.go:89] found id: ""
	I0422 18:28:54.078885   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.078893   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:54.078898   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:54.078959   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:54.117431   78377 cri.go:89] found id: ""
	I0422 18:28:54.117454   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.117462   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:54.117470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:54.117516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:54.156022   78377 cri.go:89] found id: ""
	I0422 18:28:54.156050   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.156059   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:54.156068   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:54.156085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:54.234075   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:54.234095   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:54.234108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:54.314392   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:54.314430   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:54.359388   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:54.359420   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:54.416412   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:54.416449   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.704563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.206032   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.362075   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.861096   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.883667   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:57.386081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.934970   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:56.948741   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:56.948820   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:56.984911   78377 cri.go:89] found id: ""
	I0422 18:28:56.984943   78377 logs.go:276] 0 containers: []
	W0422 18:28:56.984954   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:56.984961   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:56.985026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:57.022939   78377 cri.go:89] found id: ""
	I0422 18:28:57.022967   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.022980   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:57.022986   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:57.023033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:57.064582   78377 cri.go:89] found id: ""
	I0422 18:28:57.064606   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.064619   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:57.064626   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:57.064686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:57.105214   78377 cri.go:89] found id: ""
	I0422 18:28:57.105248   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.105259   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:57.105266   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:57.105317   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:57.142061   78377 cri.go:89] found id: ""
	I0422 18:28:57.142093   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.142104   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:57.142112   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:57.142176   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:57.187628   78377 cri.go:89] found id: ""
	I0422 18:28:57.187658   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.187668   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:57.187675   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:57.187744   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:57.223614   78377 cri.go:89] found id: ""
	I0422 18:28:57.223637   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.223645   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:57.223650   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:57.223705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:57.261853   78377 cri.go:89] found id: ""
	I0422 18:28:57.261876   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.261883   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:57.261890   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:57.261902   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:57.317980   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:57.318017   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:57.334434   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:57.334469   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:57.409639   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:57.409664   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:57.409680   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:57.494197   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:57.494240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:56.709043   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.203924   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:58.861932   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.360398   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.360867   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.882692   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.883267   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.383872   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:00.069390   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:00.083231   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:00.083307   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:00.123418   78377 cri.go:89] found id: ""
	I0422 18:29:00.123448   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.123459   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:00.123470   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:00.123533   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:00.159047   78377 cri.go:89] found id: ""
	I0422 18:29:00.159070   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.159081   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:00.159087   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:00.159191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:00.197934   78377 cri.go:89] found id: ""
	I0422 18:29:00.197960   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.198074   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:00.198086   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:00.198164   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:00.235243   78377 cri.go:89] found id: ""
	I0422 18:29:00.235273   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.235281   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:00.235287   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:00.235342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:00.271866   78377 cri.go:89] found id: ""
	I0422 18:29:00.271901   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.271912   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:00.271921   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:00.271981   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:00.308481   78377 cri.go:89] found id: ""
	I0422 18:29:00.308518   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.308531   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:00.308539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:00.308590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:00.343970   78377 cri.go:89] found id: ""
	I0422 18:29:00.343998   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.344009   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:00.344016   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:00.344063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:00.381443   78377 cri.go:89] found id: ""
	I0422 18:29:00.381462   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.381468   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:00.381475   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:00.381486   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:00.436244   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:00.436278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:00.451487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:00.451512   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:00.522440   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:00.522467   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:00.522483   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:00.602301   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:00.602333   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:03.141925   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:03.155393   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:03.155470   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:03.192801   78377 cri.go:89] found id: ""
	I0422 18:29:03.192825   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.192832   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:03.192838   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:03.192896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:03.244352   78377 cri.go:89] found id: ""
	I0422 18:29:03.244384   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.244395   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:03.244403   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:03.244466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:03.303294   78377 cri.go:89] found id: ""
	I0422 18:29:03.303318   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.303326   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:03.303331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:03.303384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:03.354236   78377 cri.go:89] found id: ""
	I0422 18:29:03.354267   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.354275   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:03.354282   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:03.354343   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:03.394639   78377 cri.go:89] found id: ""
	I0422 18:29:03.394669   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.394679   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:03.394686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:03.394754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:03.431362   78377 cri.go:89] found id: ""
	I0422 18:29:03.431408   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.431419   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:03.431428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:03.431494   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:03.472150   78377 cri.go:89] found id: ""
	I0422 18:29:03.472178   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.472186   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:03.472191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:03.472253   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:03.508059   78377 cri.go:89] found id: ""
	I0422 18:29:03.508083   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.508091   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:03.508100   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:03.508112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:03.557491   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:03.557528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:03.573208   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:03.573245   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:03.643262   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:03.643284   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:03.643295   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:03.726353   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:03.726389   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:01.204827   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.204916   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.355065   77634 pod_ready.go:81] duration metric: took 4m0.0011361s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:04.355113   77634 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:04.355148   77634 pod_ready.go:38] duration metric: took 4m14.498231749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:04.355180   77634 kubeadm.go:591] duration metric: took 4m21.764385121s to restartPrimaryControlPlane
	W0422 18:29:04.355236   77634 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:04.355261   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:06.385395   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:08.883604   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:06.270762   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:06.284792   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:06.284866   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:06.324717   78377 cri.go:89] found id: ""
	I0422 18:29:06.324750   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.324762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:06.324770   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:06.324829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:06.368279   78377 cri.go:89] found id: ""
	I0422 18:29:06.368311   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.368320   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:06.368326   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:06.368390   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:06.413754   78377 cri.go:89] found id: ""
	I0422 18:29:06.413789   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.413800   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:06.413807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:06.413864   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:06.453290   78377 cri.go:89] found id: ""
	I0422 18:29:06.453324   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.453335   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:06.453343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:06.453402   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:06.494420   78377 cri.go:89] found id: ""
	I0422 18:29:06.494472   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.494485   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:06.494493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:06.494547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:06.533736   78377 cri.go:89] found id: ""
	I0422 18:29:06.533768   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.533776   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:06.533784   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:06.533855   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:06.575873   78377 cri.go:89] found id: ""
	I0422 18:29:06.575899   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.575910   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:06.575917   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:06.575973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:06.620505   78377 cri.go:89] found id: ""
	I0422 18:29:06.620532   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.620541   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:06.620555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:06.620569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:06.701583   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:06.701607   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:06.701621   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:06.789370   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:06.789408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.832879   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:06.832915   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:06.892055   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:06.892085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:09.409104   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:09.422213   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:09.422287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:09.463906   78377 cri.go:89] found id: ""
	I0422 18:29:09.463938   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.463949   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:09.463956   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:09.464016   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:09.504600   78377 cri.go:89] found id: ""
	I0422 18:29:09.504626   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.504634   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:09.504640   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:09.504701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:09.544271   78377 cri.go:89] found id: ""
	I0422 18:29:09.544297   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.544308   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:09.544315   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:09.544385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:09.584323   78377 cri.go:89] found id: ""
	I0422 18:29:09.584355   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.584367   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:09.584375   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:09.584443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:09.621595   78377 cri.go:89] found id: ""
	I0422 18:29:09.621622   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.621632   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:09.621638   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:09.621703   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:05.703491   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:07.704534   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.705814   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:11.383569   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:13.883521   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.654701   78377 cri.go:89] found id: ""
	I0422 18:29:09.654731   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.654741   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:09.654749   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:09.654809   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:09.691517   78377 cri.go:89] found id: ""
	I0422 18:29:09.691544   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.691555   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:09.691561   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:09.691611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:09.726139   78377 cri.go:89] found id: ""
	I0422 18:29:09.726164   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.726172   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:09.726179   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:09.726192   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:09.796871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:09.796899   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:09.796920   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:09.876465   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:09.876509   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:09.917893   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:09.917930   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:09.968232   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:09.968273   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:12.484341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:12.499173   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:12.499243   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:12.536536   78377 cri.go:89] found id: ""
	I0422 18:29:12.536566   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.536577   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:12.536583   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:12.536642   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:12.578616   78377 cri.go:89] found id: ""
	I0422 18:29:12.578645   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.578655   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:12.578663   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:12.578742   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:12.615437   78377 cri.go:89] found id: ""
	I0422 18:29:12.615464   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.615475   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:12.615483   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:12.615552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:12.652622   78377 cri.go:89] found id: ""
	I0422 18:29:12.652647   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.652655   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:12.652661   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:12.652717   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:12.687831   78377 cri.go:89] found id: ""
	I0422 18:29:12.687863   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.687886   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:12.687895   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:12.687968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:12.725695   78377 cri.go:89] found id: ""
	I0422 18:29:12.725727   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.725734   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:12.725740   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:12.725801   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:12.764633   78377 cri.go:89] found id: ""
	I0422 18:29:12.764660   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.764669   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:12.764676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:12.764754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:12.803161   78377 cri.go:89] found id: ""
	I0422 18:29:12.803188   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.803199   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:12.803209   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:12.803225   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:12.874276   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:12.874298   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:12.874311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:12.961086   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:12.961123   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:13.009108   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:13.009134   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:13.060695   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:13.060741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:11.706608   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:14.204779   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:16.384284   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.884060   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:15.578465   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:15.592781   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:15.592847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:15.630723   78377 cri.go:89] found id: ""
	I0422 18:29:15.630763   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.630775   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:15.630784   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:15.630848   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:15.672656   78377 cri.go:89] found id: ""
	I0422 18:29:15.672682   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.672689   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:15.672694   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:15.672743   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:15.718081   78377 cri.go:89] found id: ""
	I0422 18:29:15.718107   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.718115   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:15.718120   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:15.718168   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:15.757204   78377 cri.go:89] found id: ""
	I0422 18:29:15.757229   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.757237   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:15.757242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:15.757289   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:15.793481   78377 cri.go:89] found id: ""
	I0422 18:29:15.793507   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.793515   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:15.793520   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:15.793571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:15.831366   78377 cri.go:89] found id: ""
	I0422 18:29:15.831414   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.831435   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:15.831443   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:15.831510   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:15.868553   78377 cri.go:89] found id: ""
	I0422 18:29:15.868583   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.868593   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:15.868601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:15.868657   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:15.908487   78377 cri.go:89] found id: ""
	I0422 18:29:15.908517   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.908527   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:15.908538   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:15.908553   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.923479   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:15.923507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:15.995109   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:15.995156   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:15.995172   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:16.074773   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:16.074812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.122088   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:16.122114   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:18.674525   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:18.688006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:18.688077   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:18.726070   78377 cri.go:89] found id: ""
	I0422 18:29:18.726101   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.726114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:18.726122   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:18.726183   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:18.762885   78377 cri.go:89] found id: ""
	I0422 18:29:18.762916   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.762928   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:18.762936   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:18.762996   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:18.802266   78377 cri.go:89] found id: ""
	I0422 18:29:18.802289   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.802297   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:18.802302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:18.802349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:18.841407   78377 cri.go:89] found id: ""
	I0422 18:29:18.841445   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.841453   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:18.841459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:18.841515   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:18.877234   78377 cri.go:89] found id: ""
	I0422 18:29:18.877308   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.877330   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:18.877343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:18.877410   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:18.917025   78377 cri.go:89] found id: ""
	I0422 18:29:18.917056   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.917063   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:18.917068   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:18.917124   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:18.954201   78377 cri.go:89] found id: ""
	I0422 18:29:18.954228   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.954235   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:18.954241   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:18.954298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:18.992427   78377 cri.go:89] found id: ""
	I0422 18:29:18.992454   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.992463   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:18.992471   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:18.992482   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:19.041093   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:19.041125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:19.056711   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:19.056742   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:19.142569   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:19.142593   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:19.142604   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:19.217815   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:19.217855   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.704652   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.704899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:21.391438   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:22.376750   77929 pod_ready.go:81] duration metric: took 4m0.000534542s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:22.376787   77929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:22.376811   77929 pod_ready.go:38] duration metric: took 4m11.560762914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:22.376844   77929 kubeadm.go:591] duration metric: took 4m19.827120959s to restartPrimaryControlPlane
	W0422 18:29:22.376929   77929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:22.376953   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:21.767953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:21.783373   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:21.783428   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:21.821614   78377 cri.go:89] found id: ""
	I0422 18:29:21.821644   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.821656   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:21.821664   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:21.821725   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:21.857122   78377 cri.go:89] found id: ""
	I0422 18:29:21.857151   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.857161   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:21.857168   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:21.857228   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:21.894803   78377 cri.go:89] found id: ""
	I0422 18:29:21.894825   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.894833   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:21.894841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:21.894896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:21.933665   78377 cri.go:89] found id: ""
	I0422 18:29:21.933701   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.933712   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:21.933723   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:21.933787   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:21.973071   78377 cri.go:89] found id: ""
	I0422 18:29:21.973113   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.973125   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:21.973143   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:21.973210   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:22.011359   78377 cri.go:89] found id: ""
	I0422 18:29:22.011391   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.011403   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:22.011410   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:22.011488   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:22.049681   78377 cri.go:89] found id: ""
	I0422 18:29:22.049709   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.049716   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:22.049721   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:22.049782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:22.088347   78377 cri.go:89] found id: ""
	I0422 18:29:22.088375   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.088386   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:22.088396   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:22.088410   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:22.142224   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:22.142267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:22.156643   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:22.156668   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:22.231849   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:22.231879   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:22.231892   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:22.313426   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:22.313470   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:21.203699   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:23.204704   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:25.206832   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:24.863473   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:24.882024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:24.882098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:24.924050   78377 cri.go:89] found id: ""
	I0422 18:29:24.924081   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.924092   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:24.924100   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:24.924163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:24.976296   78377 cri.go:89] found id: ""
	I0422 18:29:24.976326   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.976335   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:24.976345   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:24.976412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:25.029222   78377 cri.go:89] found id: ""
	I0422 18:29:25.029251   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.029272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:25.029280   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:25.029349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:25.077673   78377 cri.go:89] found id: ""
	I0422 18:29:25.077706   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.077717   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:25.077724   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:25.077784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:25.125043   78377 cri.go:89] found id: ""
	I0422 18:29:25.125078   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.125090   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:25.125098   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:25.125179   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:25.175533   78377 cri.go:89] found id: ""
	I0422 18:29:25.175566   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.175577   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:25.175585   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:25.175647   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:25.221986   78377 cri.go:89] found id: ""
	I0422 18:29:25.222016   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.222024   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:25.222030   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:25.222091   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:25.264497   78377 cri.go:89] found id: ""
	I0422 18:29:25.264536   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.264547   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:25.264558   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:25.264574   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:25.374379   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:25.374438   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:25.418690   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:25.418726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:25.472266   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:25.472300   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:25.488487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:25.488582   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:25.586957   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
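
Each "describe nodes" attempt above fails with connection refused on localhost:8443, i.e. no API server is listening on the node yet, which is also why the container probes keep returning nothing. A quick check along the same lines (assuming, like the commands above, it is run on the node itself) would be:

    # connection refused here matches the kubectl error above;
    # -k because the apiserver serves a cluster-CA-signed certificate
    curl -sk https://localhost:8443/healthz || echo "kube-apiserver is not listening on 8443"
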
	I0422 18:29:28.087958   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:28.102224   78377 kubeadm.go:591] duration metric: took 4m2.253635072s to restartPrimaryControlPlane
	W0422 18:29:28.102310   78377 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:28.102339   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:27.706178   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:30.203899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:31.612457   78377 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.510090318s)
	I0422 18:29:31.612545   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:31.628958   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:31.640917   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:31.652696   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:31.652721   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:31.652770   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:31.664114   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:31.664168   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:31.674923   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:31.684843   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:31.684896   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:31.695240   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.706058   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:31.706111   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.717091   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:31.727265   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:31.727336   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
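
The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and since every file is missing after the reset, they are all cleared before kubeadm init runs. Condensed into a single sketch (same paths and endpoint as in the log; the loop form is illustrative):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it references the expected endpoint, otherwise remove it
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
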
	I0422 18:29:31.737801   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:31.812467   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:29:31.812529   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:31.966913   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:31.967059   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:31.967197   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:32.154019   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:32.156034   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:32.156123   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:32.156226   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:32.156318   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:32.156373   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:32.156431   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:32.156486   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:32.156545   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:32.156925   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:32.157393   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:32.157903   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:32.157945   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:32.158030   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:32.431206   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:32.644858   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:32.778777   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:32.983609   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:32.999320   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:32.999451   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:32.999532   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:33.136671   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:33.138828   78377 out.go:204]   - Booting up control plane ...
	I0422 18:29:33.138935   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:33.143714   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:33.145398   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:33.157636   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:33.157801   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:29:32.204107   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:34.707228   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:36.541281   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.185998541s)
	I0422 18:29:36.541367   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:36.558729   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:36.569635   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:36.579901   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:36.579919   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:36.579959   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:36.589540   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:36.589602   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:36.600704   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:36.610945   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:36.611012   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:36.621316   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.631251   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:36.631305   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.641661   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:36.650970   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:36.651049   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:36.661012   77634 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:36.717676   77634 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:36.717771   77634 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:36.861264   77634 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:36.861404   77634 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:36.861534   77634 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:37.083032   77634 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:37.084958   77634 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:37.085069   77634 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:37.085179   77634 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:37.085296   77634 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:37.085387   77634 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:37.085505   77634 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:37.085579   77634 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:37.085665   77634 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:37.085748   77634 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:37.085869   77634 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:37.085985   77634 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:37.086037   77634 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:37.086114   77634 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:37.337747   77634 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:37.538036   77634 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:37.630303   77634 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:37.755713   77634 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:38.081451   77634 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:38.082265   77634 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:38.084958   77634 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:38.086755   77634 out.go:204]   - Booting up control plane ...
	I0422 18:29:38.086893   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:38.087023   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:38.089714   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:38.108313   77634 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:38.108786   77634 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:38.108849   77634 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:38.241537   77634 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:38.241681   77634 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:37.203550   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:39.205619   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:38.743798   77634 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.847818ms
	I0422 18:29:38.743910   77634 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:44.245440   77634 kubeadm.go:309] [api-check] The API server is healthy after 5.501913498s
	I0422 18:29:44.265283   77634 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:29:44.280940   77634 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:29:44.318688   77634 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:29:44.318990   77634 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-782377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:29:44.332201   77634 kubeadm.go:309] [bootstrap-token] Using token: o52gh5.f6sjmkidroy1sl61
	I0422 18:29:44.333546   77634 out.go:204]   - Configuring RBAC rules ...
	I0422 18:29:44.333670   77634 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:29:44.342847   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:29:44.350983   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:29:44.354214   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:29:44.361351   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:29:44.365170   77634 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:29:44.654414   77634 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:29:45.170247   77634 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:29:45.654714   77634 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:29:45.654744   77634 kubeadm.go:309] 
	I0422 18:29:45.654847   77634 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:29:45.654871   77634 kubeadm.go:309] 
	I0422 18:29:45.654984   77634 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:29:45.654996   77634 kubeadm.go:309] 
	I0422 18:29:45.655028   77634 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:29:45.655108   77634 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:29:45.655201   77634 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:29:45.655211   77634 kubeadm.go:309] 
	I0422 18:29:45.655308   77634 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:29:45.655317   77634 kubeadm.go:309] 
	I0422 18:29:45.655395   77634 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:29:45.655414   77634 kubeadm.go:309] 
	I0422 18:29:45.655486   77634 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:29:45.655597   77634 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:29:45.655700   77634 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:29:45.655714   77634 kubeadm.go:309] 
	I0422 18:29:45.655824   77634 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:29:45.655951   77634 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:29:45.655963   77634 kubeadm.go:309] 
	I0422 18:29:45.656067   77634 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656226   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:29:45.656258   77634 kubeadm.go:309] 	--control-plane 
	I0422 18:29:45.656265   77634 kubeadm.go:309] 
	I0422 18:29:45.656383   77634 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:29:45.656394   77634 kubeadm.go:309] 
	I0422 18:29:45.656513   77634 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656602   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:29:45.657124   77634 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:29:45.657152   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:29:45.657168   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:29:45.658873   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:29:41.705450   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:44.205661   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:45.660184   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:29:45.671834   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
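
The 496-byte file copied above is the bridge CNI config minikube selects for the kvm2 + crio combination. Its exact contents are not shown in the log; a typical bridge+portmap conflist of roughly that shape looks like the following (illustrative only: the subnet, plugin options, and cniVersion are assumptions, not taken from the run):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
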
	I0422 18:29:45.693947   77634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:29:45.694034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:45.694054   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-782377 minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=embed-certs-782377 minikube.k8s.io/primary=true
	I0422 18:29:45.901437   77634 ops.go:34] apiserver oom_adj: -16
	I0422 18:29:45.901443   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.402050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.902222   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.402527   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.901535   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.206598   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.703899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.401738   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:48.902497   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.402046   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.901756   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.402023   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.901600   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.401905   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.901739   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.401859   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.902155   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.661872   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.28489375s)
	I0422 18:29:54.661952   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:54.679790   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:54.689947   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:54.700173   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:54.700191   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:54.700230   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:29:54.711462   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:54.711519   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:54.721157   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:29:54.730698   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:54.730769   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:54.740596   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.750450   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:54.750521   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.760582   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:29:54.770551   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:54.770608   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:54.781181   77929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:54.834872   77929 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:54.834950   77929 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:54.982435   77929 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:54.982574   77929 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:54.982675   77929 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:55.208724   77929 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:50.704498   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:53.203270   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.206485   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.210946   77929 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:55.211072   77929 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:55.211180   77929 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:55.211326   77929 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:55.211425   77929 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:55.211546   77929 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:55.211655   77929 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:55.211746   77929 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:55.211831   77929 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:55.211932   77929 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:55.212028   77929 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:55.212076   77929 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:55.212150   77929 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:55.456090   77929 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:55.747103   77929 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:55.940962   77929 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:56.076850   77929 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:56.253326   77929 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:56.253921   77929 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:56.259311   77929 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:53.402196   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:53.902328   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.402353   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.901736   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.401514   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.902415   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.402371   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.902117   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.401817   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.902050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.402034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.574005   77634 kubeadm.go:1107] duration metric: took 12.880033802s to wait for elevateKubeSystemPrivileges
	W0422 18:29:58.574051   77634 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:29:58.574061   77634 kubeadm.go:393] duration metric: took 5m16.036878933s to StartCluster
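
The long run of "kubectl get sa default" calls above (roughly 13 seconds' worth, per the elevateKubeSystemPrivileges duration) is minikube waiting for the default ServiceAccount to appear in the new cluster; the minikube-rbac ClusterRoleBinding and node labels were applied just before the polling started. As a sketch, the wait amounts to the following loop (binary and kubeconfig paths as in the log; the retry interval is an assumption):

    # poll until the default ServiceAccount exists in the freshly initialized cluster
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
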
	I0422 18:29:58.574083   77634 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.574173   77634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:29:58.576621   77634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.576908   77634 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:29:58.578444   77634 out.go:177] * Verifying Kubernetes components...
	I0422 18:29:58.576967   77634 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:29:58.577120   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:29:58.579836   77634 addons.go:69] Setting default-storageclass=true in profile "embed-certs-782377"
	I0422 18:29:58.579846   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:29:58.579850   77634 addons.go:69] Setting metrics-server=true in profile "embed-certs-782377"
	I0422 18:29:58.579873   77634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-782377"
	I0422 18:29:58.579896   77634 addons.go:234] Setting addon metrics-server=true in "embed-certs-782377"
	W0422 18:29:58.579910   77634 addons.go:243] addon metrics-server should already be in state true
	I0422 18:29:58.579952   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.579841   77634 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-782377"
	I0422 18:29:58.580057   77634 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-782377"
	W0422 18:29:58.580070   77634 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:29:58.580099   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.580279   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580284   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580301   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580308   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580460   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580488   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.603276   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0422 18:29:58.603459   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0422 18:29:58.603483   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 18:29:58.607248   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607265   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607392   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607836   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.607853   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.607983   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.608001   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.608344   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608373   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608505   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.608932   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.608963   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612034   77634 addons.go:234] Setting addon default-storageclass=true in "embed-certs-782377"
	W0422 18:29:58.612056   77634 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:29:58.612084   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.612467   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.612485   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612786   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.612802   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.613185   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.613700   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.613728   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.630170   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0422 18:29:58.630586   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.631061   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.631081   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.631523   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.631693   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.631847   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0422 18:29:58.632457   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.632941   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.632966   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.633179   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0422 18:29:58.633322   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.633567   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.633688   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.635830   77634 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:29:58.633856   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.634354   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.636961   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.637004   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:29:58.637027   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:29:58.637045   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.637006   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.637294   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.637508   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.639287   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.640999   77634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:29:58.640236   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:56.261447   77929 out.go:204]   - Booting up control plane ...
	I0422 18:29:56.261539   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:56.261635   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:56.261736   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:56.285519   77929 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:56.285675   77929 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:56.285752   77929 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:56.437635   77929 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:56.437767   77929 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:56.944001   77929 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 506.500244ms
	I0422 18:29:56.944104   77929 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:58.640741   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.642428   77634 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.641034   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.642448   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:29:58.642456   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.642470   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.642590   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.642733   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.642860   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.645684   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646424   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.646469   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646728   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.646929   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.647079   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.647331   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.657385   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 18:29:58.658062   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.658658   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.658676   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.659065   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.659314   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.661001   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.661274   77634 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:58.661292   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:29:58.661309   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.664551   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.665029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665185   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.665397   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.665560   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.665692   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.840086   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:29:58.872963   77634 node_ready.go:35] waiting up to 6m0s for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882942   77634 node_ready.go:49] node "embed-certs-782377" has status "Ready":"True"
	I0422 18:29:58.882978   77634 node_ready.go:38] duration metric: took 9.978929ms for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882990   77634 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:58.892484   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:29:58.964679   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.987690   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:59.001748   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:29:59.001776   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:29:59.095009   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:29:59.095039   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:29:59.242427   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.242451   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:29:59.321464   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.989825   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025095721s)
	I0422 18:29:59.989883   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.989895   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.989828   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.002098611s)
	I0422 18:29:59.989974   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990005   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990193   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990231   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990239   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990247   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990254   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990306   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990341   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990355   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990369   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990380   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990504   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990523   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990572   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990588   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.025628   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.025655   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.025970   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.025991   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.434245   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.434287   77634 pod_ready.go:81] duration metric: took 1.54176792s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.434301   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454521   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.454545   77634 pod_ready.go:81] duration metric: took 20.235494ms for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454557   77634 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.473166   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.151631277s)
	I0422 18:30:00.473225   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473266   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473625   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.473660   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.473683   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.473706   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473719   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473998   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.474079   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.474098   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.474114   77634 addons.go:470] Verifying addon metrics-server=true in "embed-certs-782377"
	I0422 18:30:00.476224   77634 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:29:57.706757   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.206098   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.477945   77634 addons.go:505] duration metric: took 1.900979481s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
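(Not part of the captured log.) The metrics-server manifests applied above only create the objects; the pod is still shown Pending a few lines further down. A minimal sketch of how one might confirm the addon by hand, assuming the kube-system deployment name and k8s-app=metrics-server label used by the minikube addon (the same check applies later to the default-k8s-diff-port-856422 profile):

  kubectl --context embed-certs-782377 -n kube-system get deploy,pods -l k8s-app=metrics-server
  kubectl --context embed-certs-782377 top nodes   # only returns data once metrics-server is serving the metrics API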
	I0422 18:30:00.493925   77634 pod_ready.go:92] pod "etcd-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.493956   77634 pod_ready.go:81] duration metric: took 39.391277ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.493971   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502733   77634 pod_ready.go:92] pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.502762   77634 pod_ready.go:81] duration metric: took 8.782315ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502776   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517227   77634 pod_ready.go:92] pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.517249   77634 pod_ready.go:81] duration metric: took 14.465418ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517260   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881221   77634 pod_ready.go:92] pod "kube-proxy-6qsdm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.881245   77634 pod_ready.go:81] duration metric: took 363.979231ms for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881254   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277017   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:01.277103   77634 pod_ready.go:81] duration metric: took 395.840808ms for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277125   77634 pod_ready.go:38] duration metric: took 2.394112246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:01.277153   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:01.277240   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:01.295278   77634 api_server.go:72] duration metric: took 2.718332063s to wait for apiserver process to appear ...
	I0422 18:30:01.295316   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:01.295345   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:30:01.299754   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:30:01.300888   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:01.300912   77634 api_server.go:131] duration metric: took 5.588825ms to wait for apiserver health ...
	I0422 18:30:01.300920   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:01.480184   77634 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:01.480216   77634 system_pods.go:61] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.480220   77634 system_pods.go:61] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.480224   77634 system_pods.go:61] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.480227   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.480231   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.480234   77634 system_pods.go:61] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.480237   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.480243   77634 system_pods.go:61] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.480246   77634 system_pods.go:61] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.480253   77634 system_pods.go:74] duration metric: took 179.327678ms to wait for pod list to return data ...
	I0422 18:30:01.480260   77634 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:01.676749   77634 default_sa.go:45] found service account: "default"
	I0422 18:30:01.676792   77634 default_sa.go:55] duration metric: took 196.525393ms for default service account to be created ...
	I0422 18:30:01.676805   77634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:01.881811   77634 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:01.881846   77634 system_pods.go:89] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.881852   77634 system_pods.go:89] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.881856   77634 system_pods.go:89] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.881861   77634 system_pods.go:89] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.881866   77634 system_pods.go:89] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.881871   77634 system_pods.go:89] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.881875   77634 system_pods.go:89] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.881884   77634 system_pods.go:89] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.881891   77634 system_pods.go:89] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.881902   77634 system_pods.go:126] duration metric: took 205.08856ms to wait for k8s-apps to be running ...
	I0422 18:30:01.881915   77634 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:01.881971   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:01.898653   77634 system_svc.go:56] duration metric: took 16.727076ms WaitForService to wait for kubelet
	I0422 18:30:01.898688   77634 kubeadm.go:576] duration metric: took 3.321747224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:01.898716   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:02.079527   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:02.079552   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:02.079567   77634 node_conditions.go:105] duration metric: took 180.844523ms to run NodePressure ...
	I0422 18:30:02.079581   77634 start.go:240] waiting for startup goroutines ...
	I0422 18:30:02.079590   77634 start.go:245] waiting for cluster config update ...
	I0422 18:30:02.079603   77634 start.go:254] writing updated cluster config ...
	I0422 18:30:02.079881   77634 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:02.131965   77634 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:02.133816   77634 out.go:177] * Done! kubectl is now configured to use "embed-certs-782377" cluster and "default" namespace by default
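(Not part of the captured log.) At this point the profile is up; a quick sanity check one might run, assuming the kubeconfig context matches the profile name as the line above states:

  kubectl --context embed-certs-782377 get nodes -o wide
  kubectl --context embed-certs-782377 -n kube-system get pods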
	I0422 18:30:02.446649   77929 kubeadm.go:309] [api-check] The API server is healthy after 5.502662802s
	I0422 18:30:02.466311   77929 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:02.504029   77929 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:02.586946   77929 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:02.587250   77929 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-856422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:02.600362   77929 kubeadm.go:309] [bootstrap-token] Using token: f03yx2.2vmzf4rav70vm6gm
	I0422 18:30:02.601830   77929 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:02.601961   77929 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:02.608688   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:02.621264   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:02.625695   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:02.630424   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:02.639203   77929 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:02.856167   77929 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:03.309505   77929 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:03.855419   77929 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:03.855443   77929 kubeadm.go:309] 
	I0422 18:30:03.855541   77929 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:03.855567   77929 kubeadm.go:309] 
	I0422 18:30:03.855643   77929 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:03.855653   77929 kubeadm.go:309] 
	I0422 18:30:03.855688   77929 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:03.855756   77929 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:03.855841   77929 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:03.855854   77929 kubeadm.go:309] 
	I0422 18:30:03.855909   77929 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:03.855915   77929 kubeadm.go:309] 
	I0422 18:30:03.855954   77929 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:03.855960   77929 kubeadm.go:309] 
	I0422 18:30:03.856051   77929 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:03.856171   77929 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:03.856248   77929 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:03.856259   77929 kubeadm.go:309] 
	I0422 18:30:03.856390   77929 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:03.856484   77929 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:03.856496   77929 kubeadm.go:309] 
	I0422 18:30:03.856636   77929 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.856729   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:03.856749   77929 kubeadm.go:309] 	--control-plane 
	I0422 18:30:03.856755   77929 kubeadm.go:309] 
	I0422 18:30:03.856823   77929 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:03.856829   77929 kubeadm.go:309] 
	I0422 18:30:03.856911   77929 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.857040   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:03.857540   77929 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:03.857569   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:30:03.857583   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:03.859350   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:03.860736   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:03.873189   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:30:03.897193   77929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:03.897260   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:03.897317   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-856422 minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=default-k8s-diff-port-856422 minikube.k8s.io/primary=true
	I0422 18:30:04.114339   77929 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:04.114499   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:02.703452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.705502   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.615355   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.115530   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.614776   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.114991   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.614772   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.114921   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.614799   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.115218   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.614688   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:09.114578   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.203762   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.704636   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.615201   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.115526   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.614511   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.115041   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.615220   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.115463   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.614937   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.115470   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.615417   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:14.114916   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.158118   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:30:13.158841   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:13.159056   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
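(Not part of the captured log.) When kubeadm reports this connection-refused on port 10248, the usual follow-up is to inspect the kubelet unit on the guest; a sketch, assuming the systemd-managed kubelet inside the minikube VM (the profile name for this run is not shown in this excerpt):

  minikube ssh -p <profile>
  sudo systemctl status kubelet
  sudo journalctl -u kubelet --no-pager | tail -n 50
  curl -sSL http://localhost:10248/healthz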
	I0422 18:30:11.706452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.203931   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.614582   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.115466   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.615542   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.115554   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.614586   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.114645   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.614945   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.769793   77929 kubeadm.go:1107] duration metric: took 13.872592974s to wait for elevateKubeSystemPrivileges
	W0422 18:30:17.769857   77929 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:30:17.769869   77929 kubeadm.go:393] duration metric: took 5m15.279261637s to StartCluster
	I0422 18:30:17.769889   77929 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.769958   77929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:30:17.771921   77929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.772222   77929 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:30:17.774219   77929 out.go:177] * Verifying Kubernetes components...
	I0422 18:30:17.772365   77929 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:30:17.772496   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:30:17.776231   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:30:17.776249   77929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776267   77929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776294   77929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776307   77929 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:30:17.776321   77929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-856422"
	I0422 18:30:17.776343   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776284   77929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776412   77929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776430   77929 addons.go:243] addon metrics-server should already be in state true
	I0422 18:30:17.776469   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776775   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776809   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776778   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776846   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776777   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776926   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.796665   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0422 18:30:17.796701   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0422 18:30:17.796976   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0422 18:30:17.797083   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797472   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797609   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797795   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.797824   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798111   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798141   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798158   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798499   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798627   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798648   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798728   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.798776   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799001   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.799077   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.799107   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799274   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.803095   77929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.803141   77929 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:30:17.803175   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.803544   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.803580   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.820753   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0422 18:30:17.821272   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.821822   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.821839   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.822247   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.822315   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0422 18:30:17.822640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.823287   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0422 18:30:17.823373   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.823976   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.824141   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824152   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824479   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824498   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824561   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.824727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.825176   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.825646   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.825675   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.826014   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.828122   77929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:30:17.826808   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.829694   77929 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:17.829711   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:30:17.829729   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.831322   77929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:30:17.834942   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:30:17.834959   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:30:17.834979   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.833531   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.832894   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835054   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.835078   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.835468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.835674   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.837838   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838180   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.838204   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838459   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.838656   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.838827   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.838983   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.844804   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0422 18:30:17.845252   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.845762   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.845780   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.846071   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.846240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.847881   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.848127   77929 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:17.848142   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:30:17.848159   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.850959   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851369   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.851389   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.851786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.851918   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.852081   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.997608   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:30:18.066476   77929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.139937   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:18.141619   77929 node_ready.go:49] node "default-k8s-diff-port-856422" has status "Ready":"True"
	I0422 18:30:18.141645   77929 node_ready.go:38] duration metric: took 75.13675ms for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.141658   77929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:18.168289   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:18.217351   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:30:18.217374   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:30:18.280089   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:18.283704   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:30:18.283734   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:30:18.314907   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.314936   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:30:18.379950   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.595931   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.595969   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596350   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596374   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.596389   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596660   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596699   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596722   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610244   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.610277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.610614   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.610635   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610659   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.159553   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:18.159883   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:19.513892   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233747961s)
	I0422 18:30:19.513948   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.513961   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514326   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.514460   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.514491   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.514506   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514414   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517601   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.517617   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.805551   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425552646s)
	I0422 18:30:19.805610   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.805621   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.805986   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.806040   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.806064   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.806083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.807818   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.807865   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.807874   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.807889   77929 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-856422"
	I0422 18:30:19.809871   77929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0422 18:30:15.697614   77400 pod_ready.go:81] duration metric: took 4m0.000479463s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	E0422 18:30:15.697661   77400 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:30:15.697678   77400 pod_ready.go:38] duration metric: took 4m9.017394523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:15.697704   77400 kubeadm.go:591] duration metric: took 4m15.772560858s to restartPrimaryControlPlane
	W0422 18:30:15.697751   77400 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:30:15.697777   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:30:19.811644   77929 addons.go:505] duration metric: took 2.039289124s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0422 18:30:20.174912   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:20.675213   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.675247   77929 pod_ready.go:81] duration metric: took 2.506921343s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.675261   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681665   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.681690   77929 pod_ready.go:81] duration metric: took 6.421217ms for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681700   77929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687893   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.687926   77929 pod_ready.go:81] duration metric: took 6.218166ms for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687941   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696603   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.696634   77929 pod_ready.go:81] duration metric: took 8.684682ms for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696649   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702776   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.702800   77929 pod_ready.go:81] duration metric: took 6.141484ms for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702813   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073451   77929 pod_ready.go:92] pod "kube-proxy-4m8cm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.073485   77929 pod_ready.go:81] duration metric: took 370.663669ms for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073500   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474144   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.474175   77929 pod_ready.go:81] duration metric: took 400.665802ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474190   77929 pod_ready.go:38] duration metric: took 3.332515716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:21.474207   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:21.474273   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:21.491320   77929 api_server.go:72] duration metric: took 3.719060391s to wait for apiserver process to appear ...
	I0422 18:30:21.491352   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:21.491378   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:30:21.496589   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:30:21.497405   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:21.497426   77929 api_server.go:131] duration metric: took 6.067469ms to wait for apiserver health ...
	I0422 18:30:21.497433   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:21.675885   77929 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:21.675912   77929 system_pods.go:61] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:21.675916   77929 system_pods.go:61] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:21.675924   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:21.675928   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:21.675932   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:21.675935   77929 system_pods.go:61] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:21.675939   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:21.675945   77929 system_pods.go:61] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:21.675949   77929 system_pods.go:61] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:21.675959   77929 system_pods.go:74] duration metric: took 178.519985ms to wait for pod list to return data ...
	I0422 18:30:21.675965   77929 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:21.872358   77929 default_sa.go:45] found service account: "default"
	I0422 18:30:21.872382   77929 default_sa.go:55] duration metric: took 196.412252ms for default service account to be created ...
	I0422 18:30:21.872391   77929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:22.075660   77929 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:22.075689   77929 system_pods.go:89] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:22.075694   77929 system_pods.go:89] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:22.075698   77929 system_pods.go:89] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:22.075702   77929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:22.075706   77929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:22.075710   77929 system_pods.go:89] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:22.075714   77929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:22.075722   77929 system_pods.go:89] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:22.075726   77929 system_pods.go:89] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:22.075735   77929 system_pods.go:126] duration metric: took 203.339608ms to wait for k8s-apps to be running ...
	I0422 18:30:22.075742   77929 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:22.075785   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:22.091186   77929 system_svc.go:56] duration metric: took 15.433207ms WaitForService to wait for kubelet
	I0422 18:30:22.091219   77929 kubeadm.go:576] duration metric: took 4.318966383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:22.091237   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:22.272944   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:22.272971   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:22.272980   77929 node_conditions.go:105] duration metric: took 181.734735ms to run NodePressure ...
	I0422 18:30:22.272991   77929 start.go:240] waiting for startup goroutines ...
	I0422 18:30:22.273000   77929 start.go:245] waiting for cluster config update ...
	I0422 18:30:22.273010   77929 start.go:254] writing updated cluster config ...
	I0422 18:30:22.273248   77929 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:22.323725   77929 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:22.325876   77929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-856422" cluster and "default" namespace by default
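The "Done!" line above closes out the default-k8s-diff-port-856422 start; the healthz probe a few lines earlier hit the apiserver on the profile's non-default port 8444. A quick way to repeat that check from the host is sketched below, assuming the IP and port from the log and that anonymous access to /healthz is still enabled (the Kubernetes default):
	curl -k https://192.168.61.206:8444/healthz        # should print: ok
	kubectl --context default-k8s-diff-port-856422 get pods -n kube-system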
	I0422 18:30:28.159925   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:28.160147   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.161034   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:48.161430   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.109960   77400 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.41215685s)
	I0422 18:30:48.110037   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:48.127246   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:30:48.138280   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:30:48.148521   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:30:48.148545   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:30:48.148588   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:30:48.160411   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:30:48.160483   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:30:48.170748   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:30:48.180399   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:30:48.180451   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:30:48.192521   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.202200   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:30:48.202274   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.212241   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:30:48.221754   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:30:48.221821   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
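The four grep-then-rm pairs above apply the same staleness check to each kubeconfig under /etc/kubernetes; a compact sketch of that pattern, using the same file names and endpoint shown in the log:
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done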
	I0422 18:30:48.231555   77400 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:30:48.456873   77400 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:57.943980   77400 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:30:57.944080   77400 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:30:57.944182   77400 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:30:57.944305   77400 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:30:57.944411   77400 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:30:57.944499   77400 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:30:57.946110   77400 out.go:204]   - Generating certificates and keys ...
	I0422 18:30:57.946192   77400 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:30:57.946262   77400 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:30:57.946385   77400 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:30:57.946464   77400 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:30:57.946559   77400 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:30:57.946683   77400 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:30:57.946772   77400 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:30:57.946835   77400 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:30:57.946902   77400 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:30:57.946963   77400 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:30:57.947000   77400 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:30:57.947054   77400 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:30:57.947116   77400 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:30:57.947201   77400 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:30:57.947283   77400 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:30:57.947383   77400 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:30:57.947458   77400 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:30:57.947589   77400 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:30:57.947662   77400 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:30:57.949092   77400 out.go:204]   - Booting up control plane ...
	I0422 18:30:57.949194   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:30:57.949279   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:30:57.949336   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:30:57.949419   77400 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:30:57.949505   77400 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:30:57.949544   77400 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:30:57.949664   77400 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:30:57.949739   77400 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:30:57.949794   77400 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.588061ms
	I0422 18:30:57.949862   77400 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:30:57.949957   77400 kubeadm.go:309] [api-check] The API server is healthy after 5.510546703s
	I0422 18:30:57.950048   77400 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:57.950152   77400 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:57.950204   77400 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:57.950352   77400 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-407991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:57.950453   77400 kubeadm.go:309] [bootstrap-token] Using token: cwotot.4qmmrydp0nd6w5tq
	I0422 18:30:57.951938   77400 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:57.952040   77400 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:57.952134   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:57.952285   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:57.952410   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:57.952535   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:57.952666   77400 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:57.952799   77400 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:57.952867   77400 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:57.952936   77400 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:57.952952   77400 kubeadm.go:309] 
	I0422 18:30:57.953013   77400 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:57.953019   77400 kubeadm.go:309] 
	I0422 18:30:57.953084   77400 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:57.953090   77400 kubeadm.go:309] 
	I0422 18:30:57.953110   77400 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:57.953199   77400 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:57.953281   77400 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:57.953289   77400 kubeadm.go:309] 
	I0422 18:30:57.953374   77400 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:57.953381   77400 kubeadm.go:309] 
	I0422 18:30:57.953453   77400 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:57.953461   77400 kubeadm.go:309] 
	I0422 18:30:57.953538   77400 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:57.953636   77400 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:57.953719   77400 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:57.953726   77400 kubeadm.go:309] 
	I0422 18:30:57.953813   77400 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:57.953919   77400 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:57.953930   77400 kubeadm.go:309] 
	I0422 18:30:57.954047   77400 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954187   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:57.954222   77400 kubeadm.go:309] 	--control-plane 
	I0422 18:30:57.954232   77400 kubeadm.go:309] 
	I0422 18:30:57.954364   77400 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:57.954374   77400 kubeadm.go:309] 
	I0422 18:30:57.954440   77400 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954553   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:57.954574   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:30:57.954583   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:57.956278   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:57.957592   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:57.970080   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
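The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI config the earlier "Configuring bridge CNI" message refers to. Its exact contents are not captured in the log; the snippet below is only a representative bridge-plus-portmap conflist (the subnet and field values are assumptions):
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	      "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF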
	I0422 18:30:57.991711   77400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:57.991779   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:57.991780   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-407991 minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=no-preload-407991 minikube.k8s.io/primary=true
	I0422 18:30:58.232025   77400 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:58.232162   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:58.732395   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.232855   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.732187   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.232654   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.732995   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.232856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.732735   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.232474   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.732930   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.232411   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.732457   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.232888   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.732856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.232873   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.733177   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.232682   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.733241   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.232711   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.732922   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.232815   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.732377   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.232576   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.732243   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.232350   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.732764   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.232338   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.357414   77400 kubeadm.go:1107] duration metric: took 13.365692776s to wait for elevateKubeSystemPrivileges
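The repeated `kubectl get sa default` calls above are a poll: minikube retries roughly every 500ms until the default service account exists (the elevateKubeSystemPrivileges step timed at 13.4s here). A rough shell equivalent of that wait, using the same binary and kubeconfig paths from the log:
	until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done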
	W0422 18:31:11.357460   77400 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:31:11.357472   77400 kubeadm.go:393] duration metric: took 5m11.48385131s to StartCluster
	I0422 18:31:11.357493   77400 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.357565   77400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:31:11.359176   77400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.359391   77400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:31:11.360948   77400 out.go:177] * Verifying Kubernetes components...
	I0422 18:31:11.359461   77400 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:31:11.359641   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:31:11.362433   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:31:11.362446   77400 addons.go:69] Setting storage-provisioner=true in profile "no-preload-407991"
	I0422 18:31:11.362464   77400 addons.go:69] Setting default-storageclass=true in profile "no-preload-407991"
	I0422 18:31:11.362486   77400 addons.go:69] Setting metrics-server=true in profile "no-preload-407991"
	I0422 18:31:11.362495   77400 addons.go:234] Setting addon storage-provisioner=true in "no-preload-407991"
	I0422 18:31:11.362500   77400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-407991"
	I0422 18:31:11.362515   77400 addons.go:234] Setting addon metrics-server=true in "no-preload-407991"
	W0422 18:31:11.362527   77400 addons.go:243] addon metrics-server should already be in state true
	W0422 18:31:11.362506   77400 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:31:11.362557   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362567   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362929   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362932   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362963   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362971   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362974   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.363144   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.379089   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0422 18:31:11.379582   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.380121   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.380145   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.380496   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.381098   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.381132   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.383229   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 18:31:11.383513   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0422 18:31:11.383642   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.383977   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.384136   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384148   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384552   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.384754   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384770   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384801   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.385103   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.386102   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.386130   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.388554   77400 addons.go:234] Setting addon default-storageclass=true in "no-preload-407991"
	W0422 18:31:11.388569   77400 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:31:11.388589   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.388921   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.388938   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.401669   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0422 18:31:11.402268   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.402852   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.402869   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.403427   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.403610   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.404849   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0422 18:31:11.405356   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.405588   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.406112   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.406129   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.407696   77400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:31:11.406649   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.409174   77400 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.409195   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:31:11.409214   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.409261   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.411378   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.412836   77400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:31:11.411939   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0422 18:31:11.414011   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:31:11.414027   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:31:11.413155   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.414045   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.414069   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.413487   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.414097   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.413841   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.414686   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.414781   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.414794   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.414871   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.415256   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.415607   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.416288   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.416343   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.417257   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417623   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.417644   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417898   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.418074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.418325   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.418468   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.432218   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0422 18:31:11.432682   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.433096   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.433108   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.433685   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.433887   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.435675   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.435931   77400 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.435952   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:31:11.435969   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.438700   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439107   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.439144   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439237   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.439482   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.439662   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.439833   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.610190   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:31:11.654061   77400 node_ready.go:35] waiting up to 6m0s for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663869   77400 node_ready.go:49] node "no-preload-407991" has status "Ready":"True"
	I0422 18:31:11.663904   77400 node_ready.go:38] duration metric: took 9.806821ms for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663917   77400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:11.673895   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:11.752785   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.770023   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:31:11.770054   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:31:11.799895   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.872083   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:31:11.872113   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:31:11.984597   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:11.984626   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:31:12.059137   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:13.130584   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330646778s)
	I0422 18:31:13.130694   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130718   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.130716   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37789401s)
	I0422 18:31:13.130833   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130847   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131067   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131135   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131159   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131172   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131289   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131304   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131312   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131319   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131327   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.131559   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131574   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131601   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131621   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131621   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.173181   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.173205   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.173478   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.173501   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.279764   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.220585481s)
	I0422 18:31:13.279813   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.279828   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280221   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280241   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280261   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280276   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.280290   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280532   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280570   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280577   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280586   77400 addons.go:470] Verifying addon metrics-server=true in "no-preload-407991"
	I0422 18:31:13.282757   77400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:31:13.284029   77400 addons.go:505] duration metric: took 1.924572004s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
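The metrics-server addon is applied and verified as enabled here, yet its pod stays Pending in the listings below; the addon was pointed at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line earlier), an image that cannot be pulled, which likely explains the metrics-server related failures in this report. Two hedged checks of the addon's state, assuming the kubectl context matches the profile name and the standard k8s-app=metrics-server label:
	kubectl --context no-preload-407991 -n kube-system get deploy,pod -l k8s-app=metrics-server
	kubectl --context no-preload-407991 get apiservice v1beta1.metrics.k8s.io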
	I0422 18:31:13.681968   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.682004   77400 pod_ready.go:81] duration metric: took 2.008061657s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.682017   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687240   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.687268   77400 pod_ready.go:81] duration metric: took 5.242949ms for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687281   77400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693047   77400 pod_ready.go:92] pod "etcd-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.693074   77400 pod_ready.go:81] duration metric: took 5.784769ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693086   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705008   77400 pod_ready.go:92] pod "kube-apiserver-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.705028   77400 pod_ready.go:81] duration metric: took 11.934672ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705037   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721814   77400 pod_ready.go:92] pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.721840   77400 pod_ready.go:81] duration metric: took 16.796546ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721855   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079660   77400 pod_ready.go:92] pod "kube-proxy-47g8k" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.079681   77400 pod_ready.go:81] duration metric: took 357.819791ms for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079692   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480000   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.480026   77400 pod_ready.go:81] duration metric: took 400.326493ms for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480037   77400 pod_ready.go:38] duration metric: took 2.816106046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:14.480054   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:31:14.480123   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:31:14.508798   77400 api_server.go:72] duration metric: took 3.149365253s to wait for apiserver process to appear ...
	I0422 18:31:14.508822   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:31:14.508842   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:31:14.523293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:31:14.524410   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:31:14.524439   77400 api_server.go:131] duration metric: took 15.608906ms to wait for apiserver health ...
	I0422 18:31:14.524448   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:31:14.682120   77400 system_pods.go:59] 9 kube-system pods found
	I0422 18:31:14.682152   77400 system_pods.go:61] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:14.682157   77400 system_pods.go:61] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:14.682161   77400 system_pods.go:61] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:14.682164   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:14.682169   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:14.682173   77400 system_pods.go:61] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:14.682178   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:14.682188   77400 system_pods.go:61] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:14.682194   77400 system_pods.go:61] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:14.682205   77400 system_pods.go:74] duration metric: took 157.750249ms to wait for pod list to return data ...
	I0422 18:31:14.682222   77400 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:31:14.878556   77400 default_sa.go:45] found service account: "default"
	I0422 18:31:14.878581   77400 default_sa.go:55] duration metric: took 196.353021ms for default service account to be created ...
	I0422 18:31:14.878590   77400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:31:15.081385   77400 system_pods.go:86] 9 kube-system pods found
	I0422 18:31:15.081415   77400 system_pods.go:89] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:15.081425   77400 system_pods.go:89] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:15.081430   77400 system_pods.go:89] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:15.081434   77400 system_pods.go:89] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:15.081438   77400 system_pods.go:89] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:15.081448   77400 system_pods.go:89] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:15.081452   77400 system_pods.go:89] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:15.081458   77400 system_pods.go:89] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:15.081464   77400 system_pods.go:89] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:15.081476   77400 system_pods.go:126] duration metric: took 202.881032ms to wait for k8s-apps to be running ...
	I0422 18:31:15.081484   77400 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:31:15.081530   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:15.098245   77400 system_svc.go:56] duration metric: took 16.748933ms WaitForService to wait for kubelet
	I0422 18:31:15.098278   77400 kubeadm.go:576] duration metric: took 3.738847086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:31:15.098302   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:31:15.278812   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:31:15.278839   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:31:15.278848   77400 node_conditions.go:105] duration metric: took 180.541553ms to run NodePressure ...
	I0422 18:31:15.278859   77400 start.go:240] waiting for startup goroutines ...
	I0422 18:31:15.278866   77400 start.go:245] waiting for cluster config update ...
	I0422 18:31:15.278875   77400 start.go:254] writing updated cluster config ...
	I0422 18:31:15.279242   77400 ssh_runner.go:195] Run: rm -f paused
	I0422 18:31:15.330788   77400 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:31:15.333274   77400 out.go:177] * Done! kubectl is now configured to use "no-preload-407991" cluster and "default" namespace by default
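	At this point the no-preload-407991 profile is up; a quick manual spot-check equivalent to the waits logged above is sketched below (context name taken from the log, pod names vary per run):
	
	# list the kube-system pods minikube just waited on
	kubectl --context no-preload-407991 get pods -n kube-system
	# hit the same apiserver /healthz endpoint the health wait polls
	kubectl --context no-preload-407991 get --raw /healthz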
	I0422 18:31:28.163100   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:31:28.163394   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163417   78377 kubeadm.go:309] 
	I0422 18:31:28.163487   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:31:28.163724   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:31:28.163734   78377 kubeadm.go:309] 
	I0422 18:31:28.163791   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:31:28.163857   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:31:28.164010   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:31:28.164024   78377 kubeadm.go:309] 
	I0422 18:31:28.164159   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:31:28.164207   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:31:28.164251   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:31:28.164265   78377 kubeadm.go:309] 
	I0422 18:31:28.164413   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:31:28.164579   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:31:28.164607   78377 kubeadm.go:309] 
	I0422 18:31:28.164767   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:31:28.164919   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:31:28.165050   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:31:28.165153   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:31:28.165169   78377 kubeadm.go:309] 
	I0422 18:31:28.166948   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:31:28.167081   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:31:28.167206   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:31:28.167328   78377 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
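	The kubeadm failure above keeps pointing at the same three leads; consolidated into a sketch to run on the node (e.g. via minikube ssh; the crio socket path and the healthz URL are the ones kubeadm prints):
	
	# is the kubelet unit running, and why did it last stop?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# the health endpoint kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz
	# did any control-plane container get created and then crash?
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause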
	
	I0422 18:31:28.167404   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:31:28.857637   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:28.875137   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:31:28.887680   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:31:28.887713   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:31:28.887768   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:31:28.900305   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:31:28.900364   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:31:28.912825   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:31:28.927080   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:31:28.927184   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:31:28.939052   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.949650   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:31:28.949726   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.960782   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:31:28.972073   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:31:28.972131   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:31:28.983161   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:31:29.220135   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:33:25.762018   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:33:25.762162   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:33:25.763935   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:33:25.763996   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:33:25.764109   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:33:25.764234   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:33:25.764384   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:33:25.764478   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:33:25.766215   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:33:25.766332   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:33:25.766425   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:33:25.766525   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:33:25.766612   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:33:25.766680   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:33:25.766725   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:33:25.766778   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:33:25.766829   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:33:25.766907   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:33:25.766999   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:33:25.767062   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:33:25.767150   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:33:25.767210   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:33:25.767277   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:33:25.767378   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:33:25.767465   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:33:25.767602   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:33:25.767714   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:33:25.767848   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:33:25.767944   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:33:25.769378   78377 out.go:204]   - Booting up control plane ...
	I0422 18:33:25.769497   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:33:25.769600   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:33:25.769691   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:33:25.769819   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:33:25.769987   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:33:25.770059   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:33:25.770164   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770451   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770538   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770748   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770827   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771002   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771066   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771264   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771397   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771583   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771594   78377 kubeadm.go:309] 
	I0422 18:33:25.771655   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:33:25.771711   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:33:25.771726   78377 kubeadm.go:309] 
	I0422 18:33:25.771779   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:33:25.771836   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:33:25.771973   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:33:25.771981   78377 kubeadm.go:309] 
	I0422 18:33:25.772091   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:33:25.772132   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:33:25.772175   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:33:25.772182   78377 kubeadm.go:309] 
	I0422 18:33:25.772286   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:33:25.772374   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:33:25.772381   78377 kubeadm.go:309] 
	I0422 18:33:25.772491   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:33:25.772570   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:33:25.772641   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:33:25.772702   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:33:25.772741   78377 kubeadm.go:309] 
	I0422 18:33:25.772767   78377 kubeadm.go:393] duration metric: took 7m59.977108208s to StartCluster
	I0422 18:33:25.772800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:33:25.772854   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:33:25.824904   78377 cri.go:89] found id: ""
	I0422 18:33:25.824928   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.824946   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:33:25.824957   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:33:25.825011   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:33:25.864537   78377 cri.go:89] found id: ""
	I0422 18:33:25.864563   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.864570   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:33:25.864575   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:33:25.864630   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:33:25.906760   78377 cri.go:89] found id: ""
	I0422 18:33:25.906784   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.906793   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:33:25.906800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:33:25.906868   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:33:25.945325   78377 cri.go:89] found id: ""
	I0422 18:33:25.945347   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.945354   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:33:25.945360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:33:25.945407   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:33:25.984005   78377 cri.go:89] found id: ""
	I0422 18:33:25.984035   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.984052   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:33:25.984059   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:33:25.984121   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:33:26.023499   78377 cri.go:89] found id: ""
	I0422 18:33:26.023525   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.023535   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:33:26.023549   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:33:26.023611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:33:26.064439   78377 cri.go:89] found id: ""
	I0422 18:33:26.064468   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.064479   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:33:26.064487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:33:26.064552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:33:26.104231   78377 cri.go:89] found id: ""
	I0422 18:33:26.104254   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.104262   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
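	The empty listings above are repeated crictl filters; the same sweep can be reproduced by hand with a loop over the component names minikube checks (a sketch, run on the node):
	
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name=$c   # empty output means the container was never created
	done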
	I0422 18:33:26.104270   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:33:26.104282   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:33:26.213826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:33:26.213871   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:33:26.278837   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:33:26.278866   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:33:26.337634   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:33:26.337677   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:33:26.351578   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:33:26.351605   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:33:26.445108   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0422 18:33:26.445139   78377 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:33:26.445177   78377 out.go:239] * 
	W0422 18:33:26.445248   78377 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.445279   78377 out.go:239] * 
	W0422 18:33:26.446406   78377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:33:26.450209   78377 out.go:177] 
	W0422 18:33:26.451494   78377 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.451552   78377 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:33:26.451576   78377 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:33:26.453333   78377 out.go:177] 
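	The K8S_KUBELET_NOT_RUNNING exit above comes with a concrete hint; as a sketch, the retry would pin the kubelet cgroup driver to systemd. Only --kubernetes-version and --extra-config are taken from the log; the profile name and the remaining flags are illustrative for this KVM/crio job:
	
	minikube start -p <profile> \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd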
	
	
	==> CRI-O <==
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.466686609Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1713810671859959279,StartedAt:1713810671928790584,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47g8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b0f8e68-3a4a-4863-85e7-a5bba444bc39,},Annotations:map[string]string{io.kubernetes.container.hash: cedf1680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9b0f8e68-3a4a-4863-85e7-a5bba444bc39/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9b0f8e68-3a4a-4863-85e7-a5bba444bc39/containers/kube-proxy/0fdc1f65,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/
lib/kubelet/pods/9b0f8e68-3a4a-4863-85e7-a5bba444bc39/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/9b0f8e68-3a4a-4863-85e7-a5bba444bc39/volumes/kubernetes.io~projected/kube-api-access-hrkpk,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-47g8k_9b0f8e68-3a4a-4863-85e7-a5bba444bc39/kube-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-c
ollector/interceptors.go:74" id=53c16e2f-1294-4771-9791-cc7b8852f5dc name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.467864213Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,Verbose:false,}" file="otel-collector/interceptors.go:62" id=31a952d9-8d23-482e-a7b4-13a7057f3672 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.468213121Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713810651765344077,StartedAt:1713810651872231293,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.12-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d40b5af9fb726dea1f435393c4f523,},Annotations:map[string]string{io.kubernetes.container.hash: 40c68c9e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/59d40b5af9fb726dea1f435393c4f523/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/59d40b5af9fb726dea1f435393c4f523/containers/etcd/745c8f2b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-
no-preload-407991_59d40b5af9fb726dea1f435393c4f523/etcd/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=31a952d9-8d23-482e-a7b4-13a7057f3672 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.469375222Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,Verbose:false,}" file="otel-collector/interceptors.go:62" id=03b702c9-8ff3-40e2-a84d-ce1dacb13cf3 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.469700259Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713810651745676934,StartedAt:1713810651867620342,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77beb6980eb3fa091e5fddc4154c0c31,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/77beb6980eb3fa091e5fddc4154c0c31/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/77beb6980eb3fa091e5fddc4154c0c31/containers/kube-scheduler/f185f154,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-no-preload-407991_77beb6980eb3fa091e5fddc4154c0c31/kube-scheduler/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPe
riod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=03b702c9-8ff3-40e2-a84d-ce1dacb13cf3 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.470791411Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,Verbose:false,}" file="otel-collector/interceptors.go:62" id=6c124659-3dd6-4abc-aa58-5f84f187607a name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.470890411Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713810651663408047,StartedAt:1713810651771276113,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe8506f7beabb3b76305583423c6ad0,},Annotations:map[string]string{io.kubernetes.container.hash: 15ca256d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ffe8506f7beabb3b76305583423c6ad0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ffe8506f7beabb3b76305583423c6ad0/containers/kube-apiserver/60526b94,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{Contain
erPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-no-preload-407991_ffe8506f7beabb3b76305583423c6ad0/kube-apiserver/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6c124659-3dd6-4abc-aa58-5f84f187607a name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.471460055Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=bd4bc509-ba65-40ac-9791-c63c4cf9c8f7 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.471603613Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1713810651593378416,StartedAt:1713810651707558195,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.30.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e5f7356814fb10b848064696e83862,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e7e5f7356814fb10b848064696e83862/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e7e5f7356814fb10b848064696e83862/containers/kube-controller-manager/d6fc8809,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE
,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-407991_e7e5f7356814fb10b848064696e83862/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetM
ems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=bd4bc509-ba65-40ac-9791-c63c4cf9c8f7 name=/runtime.v1.RuntimeService/ContainerStatus
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.503285274Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f492434d-b8c7-471d-86d8-9011a20bdbd8 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.503378670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f492434d-b8c7-471d-86d8-9011a20bdbd8 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.504659283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c5f1fc1-7c73-4bd9-9579-02fd51702379 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.505352892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811217504983125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c5f1fc1-7c73-4bd9-9579-02fd51702379 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.506232935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=987b9c42-bcff-4c45-be91-693273fe2429 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.506284302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=987b9c42-bcff-4c45-be91-693273fe2429 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.506462140Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8,PodSandboxId:91405b7dfb5119be8e9ac5a920602aea5af70d0709f7704ff8d5a02dc133eca2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810673666135964,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c704413-c118-4a17-9a18-e13fd3c092f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d19f4df,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b,PodSandboxId:fa4acc3b1d07c6001a52b7f4a1d7ad1bc8c7a946cf485d31b6c704654563291e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672714755250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fclvg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2c4436-1941-4409-8a6b-5f377cb7212c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7ac21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f,PodSandboxId:44f96aef11a5613094bc33ac16065e7b27f7e9ee577dd9753ccc083f4b918f18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672599716328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9tt8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42
140aad-7ab4-4f46-9f24-0fc8717220f4,},Annotations:map[string]string{io.kubernetes.container.hash: aa57921c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef,PodSandboxId:dcd0d87c5e1eccc31556bd38d9a68dfad992b8fa94ad8a2c65eda2e4ca824222,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810671697699613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47g8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b0f8e68-3a4a-4863-85e7-a5bba444bc39,},Annotations:map[string]string{io.kubernetes.container.hash: cedf1680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,PodSandboxId:3a402124ae25d858d6345d163c57e1093b6e845c9d00edcbe25356650f5b7ad0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810651665369904,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d40b5af9fb726dea1f435393c4f523,},Annotations:map[string]string{io.kubernetes.container.hash: 40c68c9e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,PodSandboxId:cbd798c4ad9e8d6f4dc7a0ad023c21512288aa2ecbdb534bbd5393857601528e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810651626695487,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77beb6980eb3fa091e5fddc4154c0c31,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,PodSandboxId:c0685cb27fc984b52e4394fcf8aecd91754cfd7ed90fbf0cec348ea765f5d646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810651567438831,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe8506f7beabb3b76305583423c6ad0,},Annotations:map[string]string{io.kubernetes.container.hash: 15ca256d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,PodSandboxId:cd282a65c6c517b7d02da5cf8d60979d5c90714b56f55e27605088be84ce376a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810651528983943,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e5f7356814fb10b848064696e83862,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=987b9c42-bcff-4c45-be91-693273fe2429 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.517305121Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=3bd845bc-add8-4bb1-a41c-dd68bdd833d6 name=/runtime.v1.RuntimeService/Status
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.517379018Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3bd845bc-add8-4bb1-a41c-dd68bdd833d6 name=/runtime.v1.RuntimeService/Status
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.543327529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39283797-22e6-4523-8e38-20575a52d735 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.543448135Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39283797-22e6-4523-8e38-20575a52d735 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.544718635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c6e212df-9a6c-4cb8-8bfb-6ddc5f332724 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.545231483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811217545207781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c6e212df-9a6c-4cb8-8bfb-6ddc5f332724 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.545918312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a50a1f42-19b6-4a2e-95ab-71736a87cf87 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.546009526Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a50a1f42-19b6-4a2e-95ab-71736a87cf87 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:40:17 no-preload-407991 crio[723]: time="2024-04-22 18:40:17.546245554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8,PodSandboxId:91405b7dfb5119be8e9ac5a920602aea5af70d0709f7704ff8d5a02dc133eca2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810673666135964,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c704413-c118-4a17-9a18-e13fd3c092f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d19f4df,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b,PodSandboxId:fa4acc3b1d07c6001a52b7f4a1d7ad1bc8c7a946cf485d31b6c704654563291e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672714755250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fclvg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2c4436-1941-4409-8a6b-5f377cb7212c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7ac21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f,PodSandboxId:44f96aef11a5613094bc33ac16065e7b27f7e9ee577dd9753ccc083f4b918f18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672599716328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9tt8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42
140aad-7ab4-4f46-9f24-0fc8717220f4,},Annotations:map[string]string{io.kubernetes.container.hash: aa57921c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef,PodSandboxId:dcd0d87c5e1eccc31556bd38d9a68dfad992b8fa94ad8a2c65eda2e4ca824222,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810671697699613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47g8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b0f8e68-3a4a-4863-85e7-a5bba444bc39,},Annotations:map[string]string{io.kubernetes.container.hash: cedf1680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,PodSandboxId:3a402124ae25d858d6345d163c57e1093b6e845c9d00edcbe25356650f5b7ad0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810651665369904,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d40b5af9fb726dea1f435393c4f523,},Annotations:map[string]string{io.kubernetes.container.hash: 40c68c9e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,PodSandboxId:cbd798c4ad9e8d6f4dc7a0ad023c21512288aa2ecbdb534bbd5393857601528e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810651626695487,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77beb6980eb3fa091e5fddc4154c0c31,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,PodSandboxId:c0685cb27fc984b52e4394fcf8aecd91754cfd7ed90fbf0cec348ea765f5d646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810651567438831,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe8506f7beabb3b76305583423c6ad0,},Annotations:map[string]string{io.kubernetes.container.hash: 15ca256d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,PodSandboxId:cd282a65c6c517b7d02da5cf8d60979d5c90714b56f55e27605088be84ce376a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810651528983943,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e5f7356814fb10b848064696e83862,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a50a1f42-19b6-4a2e-95ab-71736a87cf87 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cdad283db1c7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   91405b7dfb511       storage-provisioner
	4b7f4e1e06ee2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   fa4acc3b1d07c       coredns-7db6d8ff4d-fclvg
	cab159e124934       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   44f96aef11a56       coredns-7db6d8ff4d-9tt8m
	e92f03b86edaa       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   9 minutes ago       Running             kube-proxy                0                   dcd0d87c5e1ec       kube-proxy-47g8k
	22caba79f3789       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   3a402124ae25d       etcd-no-preload-407991
	9ce2e44a81d88       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   9 minutes ago       Running             kube-scheduler            2                   cbd798c4ad9e8       kube-scheduler-no-preload-407991
	b532db71bb33f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   9 minutes ago       Running             kube-apiserver            2                   c0685cb27fc98       kube-apiserver-no-preload-407991
	4e576823a82a0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   9 minutes ago       Running             kube-controller-manager   2                   cd282a65c6c51       kube-controller-manager-no-preload-407991
	
	
	==> coredns [4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-407991
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-407991
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=no-preload-407991
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:30:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-407991
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:40:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:36:25 +0000   Mon, 22 Apr 2024 18:30:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:36:25 +0000   Mon, 22 Apr 2024 18:30:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:36:25 +0000   Mon, 22 Apr 2024 18:30:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:36:25 +0000   Mon, 22 Apr 2024 18:30:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    no-preload-407991
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d4f172ff26040a2976ef0fc34ce9b7b
	  System UUID:                7d4f172f-f260-40a2-976e-f0fc34ce9b7b
	  Boot ID:                    63c97cfd-5021-47a5-a4b5-dd9d389e4109
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9tt8m                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-fclvg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-no-preload-407991                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-407991             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-407991    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-47g8k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-407991             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-569cc877fc-vrzfj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  9m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m27s)  kubelet          Node no-preload-407991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m27s)  kubelet          Node no-preload-407991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m27s)  kubelet          Node no-preload-407991 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node no-preload-407991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node no-preload-407991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node no-preload-407991 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m6s                   node-controller  Node no-preload-407991 event: Registered Node no-preload-407991 in Controller
	
	
	==> dmesg <==
	[  +0.059276] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040838] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997989] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.840596] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.692614] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.820740] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.059704] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063156] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.201647] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.114840] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.313043] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +17.117183] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.068467] kauditd_printk_skb: 130 callbacks suppressed
	[Apr22 18:26] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +4.590176] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.407316] kauditd_printk_skb: 79 callbacks suppressed
	[Apr22 18:30] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.680055] systemd-fstab-generator[4013]: Ignoring "noauto" option for root device
	[  +4.560553] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.009627] systemd-fstab-generator[4335]: Ignoring "noauto" option for root device
	[Apr22 18:31] systemd-fstab-generator[4545]: Ignoring "noauto" option for root device
	[  +0.122947] kauditd_printk_skb: 14 callbacks suppressed
	[Apr22 18:32] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea] <==
	{"level":"info","ts":"2024-04-22T18:30:52.03541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 switched to configuration voters=(7638128315634442530)"}
	{"level":"info","ts":"2024-04-22T18:30:52.035537Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ae46f2aa0c35daf3","local-member-id":"6a00153c0a3e6122","added-peer-id":"6a00153c0a3e6122","added-peer-peer-urls":["https://192.168.39.164:2380"]}
	{"level":"info","ts":"2024-04-22T18:30:52.062645Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T18:30:52.063223Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6a00153c0a3e6122","initial-advertise-peer-urls":["https://192.168.39.164:2380"],"listen-peer-urls":["https://192.168.39.164:2380"],"advertise-client-urls":["https://192.168.39.164:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.164:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:30:52.065121Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:30:52.065106Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-04-22T18:30:52.065374Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-04-22T18:30:52.784134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-22T18:30:52.784202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-22T18:30:52.784238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 received MsgPreVoteResp from 6a00153c0a3e6122 at term 1"}
	{"level":"info","ts":"2024-04-22T18:30:52.78425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 became candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.784255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 received MsgVoteResp from 6a00153c0a3e6122 at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.784263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 became leader at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.784274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a00153c0a3e6122 elected leader 6a00153c0a3e6122 at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.789232Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.793336Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6a00153c0a3e6122","local-member-attributes":"{Name:no-preload-407991 ClientURLs:[https://192.168.39.164:2379]}","request-path":"/0/members/6a00153c0a3e6122/attributes","cluster-id":"ae46f2aa0c35daf3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:30:52.79518Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae46f2aa0c35daf3","local-member-id":"6a00153c0a3e6122","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.795281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:30:52.795306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.795377Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.795412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:30:52.797304Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:30:52.797366Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:30:52.797441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:30:52.798836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.164:2379"}
	
	
	==> kernel <==
	 18:40:17 up 14 min,  0 users,  load average: 0.19, 0.30, 0.23
	Linux no-preload-407991 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07] <==
	I0422 18:34:14.020555       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:35:54.433145       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:35:54.433334       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0422 18:35:55.434258       1 handler_proxy.go:93] no RequestInfo found in the context
	W0422 18:35:55.434273       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:35:55.434441       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:35:55.434478       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0422 18:35:55.434379       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:35:55.435788       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:36:55.435716       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:36:55.435815       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:36:55.435829       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:36:55.435917       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:36:55.436007       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:36:55.437256       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:38:55.436596       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:38:55.436708       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:38:55.436718       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:38:55.438015       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:38:55.438198       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:38:55.438229       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
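
The repeated 503s above indicate that the aggregated metrics API (v1beta1.metrics.k8s.io) never became reachable from the apiserver on this node. A minimal way to confirm this against the same cluster is sketched below; it assumes the kubectl context name matches the minikube profile (no-preload-407991) and reuses the metrics-server pod name from the pod list above:

  kubectl --context no-preload-407991 get apiservice v1beta1.metrics.k8s.io
  kubectl --context no-preload-407991 -n kube-system describe pod metrics-server-569cc877fc-vrzfj

If the APIService shows Available=False (typically with reason FailedDiscoveryCheck or MissingEndpoints), the apiserver cannot reach the metrics-server Service endpoints, which matches the "service unavailable" bodies in the log above.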
	
	
	==> kube-controller-manager [4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0] <==
	I0422 18:34:41.801810       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:35:11.343393       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:35:11.810488       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:35:41.348966       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:35:41.819742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:36:11.354762       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:36:11.827995       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:36:41.360928       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:36:41.837181       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:37:05.336132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="341.868µs"
	E0422 18:37:11.367100       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:37:11.845702       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:37:16.333443       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="121.104µs"
	E0422 18:37:41.373314       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:37:41.854512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:38:11.379301       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:38:11.864130       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:38:41.385415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:38:41.875865       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:39:11.395733       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:39:11.885883       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:39:41.402869       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:39:41.894720       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:40:11.409260       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:40:11.905358       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef] <==
	I0422 18:31:12.017636       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:31:12.031254       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.164"]
	I0422 18:31:12.170768       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:31:12.170817       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:31:12.170833       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:31:12.173816       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:31:12.174008       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:31:12.174026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:31:12.179961       1 config.go:192] "Starting service config controller"
	I0422 18:31:12.180118       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:31:12.180215       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:31:12.180245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:31:12.191622       1 config.go:319] "Starting node config controller"
	I0422 18:31:12.191834       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:31:12.281256       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 18:31:12.281337       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:31:12.291914       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc] <==
	W0422 18:30:55.322933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 18:30:55.323010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 18:30:55.368856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:55.368936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:55.395688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 18:30:55.395752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 18:30:55.416610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 18:30:55.416849       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 18:30:55.531149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:30:55.531251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 18:30:55.565253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 18:30:55.565344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 18:30:55.584850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 18:30:55.584904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 18:30:55.673277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:55.673333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:55.692307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:55.692359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:55.772335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:30:55.772428       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:30:55.777161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:30:55.777219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 18:30:55.815508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:30:55.815559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0422 18:30:58.283825       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:37:57 no-preload-407991 kubelet[4342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:37:57 no-preload-407991 kubelet[4342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:37:57 no-preload-407991 kubelet[4342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:37:57 no-preload-407991 kubelet[4342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:38:09 no-preload-407991 kubelet[4342]: E0422 18:38:09.316180    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:38:20 no-preload-407991 kubelet[4342]: E0422 18:38:20.315290    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:38:35 no-preload-407991 kubelet[4342]: E0422 18:38:35.321658    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:38:48 no-preload-407991 kubelet[4342]: E0422 18:38:48.315507    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:38:57 no-preload-407991 kubelet[4342]: E0422 18:38:57.366243    4342 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:38:57 no-preload-407991 kubelet[4342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:38:57 no-preload-407991 kubelet[4342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:38:57 no-preload-407991 kubelet[4342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:38:57 no-preload-407991 kubelet[4342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:39:01 no-preload-407991 kubelet[4342]: E0422 18:39:01.314363    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:39:16 no-preload-407991 kubelet[4342]: E0422 18:39:16.315228    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:39:28 no-preload-407991 kubelet[4342]: E0422 18:39:28.315998    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:39:41 no-preload-407991 kubelet[4342]: E0422 18:39:41.316414    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:39:52 no-preload-407991 kubelet[4342]: E0422 18:39:52.315685    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:39:57 no-preload-407991 kubelet[4342]: E0422 18:39:57.365169    4342 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:39:57 no-preload-407991 kubelet[4342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:39:57 no-preload-407991 kubelet[4342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:39:57 no-preload-407991 kubelet[4342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:39:57 no-preload-407991 kubelet[4342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:40:05 no-preload-407991 kubelet[4342]: E0422 18:40:05.315276    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:40:17 no-preload-407991 kubelet[4342]: E0422 18:40:17.318597    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	
	
	==> storage-provisioner [cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8] <==
	I0422 18:31:13.796431       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:31:13.808844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:31:13.809239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:31:13.825321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:31:13.825615       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-407991_792cf973-3284-4091-b176-6db56f70a08f!
	I0422 18:31:13.825890       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1af96034-d1fb-4625-a9b9-c59fe9c2410c", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-407991_792cf973-3284-4091-b176-6db56f70a08f became leader
	I0422 18:31:13.929601       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-407991_792cf973-3284-4091-b176-6db56f70a08f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407991 -n no-preload-407991
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-407991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-vrzfj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-407991 describe pod metrics-server-569cc877fc-vrzfj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-407991 describe pod metrics-server-569cc877fc-vrzfj: exit status 1 (65.034242ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-vrzfj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-407991 describe pod metrics-server-569cc877fc-vrzfj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:34:08.539829   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:34:32.242214   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:34:37.165220   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:34:43.383317   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:34:50.047689   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:34:57.216660   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:35:07.902512   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:36:00.211727   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:36:03.541876   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:36:13.092739   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:36:19.002914   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:36:20.263851   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:37:26.587091   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:37:45.496215   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:38:09.194298   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:38:20.338604   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:39:37.165271   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:39:50.048322   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:39:57.217120   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:40:07.902074   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:41:03.542109   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
	(the warning above was repeated a further 15 times while the test kept polling)
E0422 18:41:19.003224   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
	(the warning above was repeated a further 69 times while the test kept polling)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (259.565654ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-367072" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
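The wait above polls the apiserver for pods carrying the dashboard label; a minimal manual equivalent, assuming the profile's kubeconfig context is still named old-k8s-version-367072, would be:

    kubectl --context old-k8s-version-367072 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver refusing connections on 192.168.72.149:8443, this query would fail the same way until the control plane comes back up.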
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (249.272927ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
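The two status probes above read different fields of the same status struct; a sketch of checking both by hand (assuming the same profile name) is:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072

Here the host VM reports Running while the apiserver reports Stopped, which is why the non-zero exit above is flagged as "(may be ok)".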
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-367072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-367072 logs -n 25: (1.570346095s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo cat                              | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:21:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:21:44.651239   78377 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:21:44.651502   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651512   78377 out.go:304] Setting ErrFile to fd 2...
	I0422 18:21:44.651517   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651743   78377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:21:44.652361   78377 out.go:298] Setting JSON to false
	I0422 18:21:44.653361   78377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7450,"bootTime":1713802655,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:21:44.653418   78377 start.go:139] virtualization: kvm guest
	I0422 18:21:44.655663   78377 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:21:44.657140   78377 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:21:44.658441   78377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:21:44.657169   78377 notify.go:220] Checking for updates...
	I0422 18:21:44.661128   78377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:21:44.662518   78377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:21:44.663775   78377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:21:44.665418   78377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:21:44.667565   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:21:44.667940   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.667974   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.682806   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0422 18:21:44.683248   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.683772   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.683796   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.684162   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.684386   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.686458   78377 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:21:44.688047   78377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:21:44.688430   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.688471   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.703069   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0422 18:21:44.703543   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.704022   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.704045   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.704344   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.704551   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.740500   78377 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:21:44.741959   78377 start.go:297] selected driver: kvm2
	I0422 18:21:44.741977   78377 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.742115   78377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:21:44.742852   78377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.742936   78377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:21:44.757771   78377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:21:44.758147   78377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:21:44.758223   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:21:44.758237   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:21:44.758283   78377 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.758417   78377 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.760296   78377 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:21:44.761538   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:21:44.761589   78377 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:21:44.761603   78377 cache.go:56] Caching tarball of preloaded images
	I0422 18:21:44.761682   78377 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:21:44.761696   78377 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:21:44.761815   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:21:44.762033   78377 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:21:45.719482   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:48.791433   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:54.871446   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:57.943441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:04.023441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:07.095417   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:13.175430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:16.247522   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:22.327414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:25.399441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:31.479440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:34.551439   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:40.631451   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:43.703447   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:49.783400   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:52.855484   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:58.935464   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:02.007435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:08.087442   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:11.159452   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:17.239435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:20.311430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:26.391420   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:29.463418   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:35.543443   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:38.615421   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:44.695419   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:47.767475   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:53.847471   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:56.919436   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:02.999404   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:06.071458   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:12.151440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:15.223414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:18.227587   77634 start.go:364] duration metric: took 4m29.759611802s to acquireMachinesLock for "embed-certs-782377"
	I0422 18:24:18.227650   77634 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:18.227661   77634 fix.go:54] fixHost starting: 
	I0422 18:24:18.227979   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:18.228013   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:18.243001   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0422 18:24:18.243415   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:18.243835   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:24:18.243850   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:18.244219   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:18.244384   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:18.244534   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:24:18.246202   77634 fix.go:112] recreateIfNeeded on embed-certs-782377: state=Stopped err=<nil>
	I0422 18:24:18.246228   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	W0422 18:24:18.246399   77634 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:18.248257   77634 out.go:177] * Restarting existing kvm2 VM for "embed-certs-782377" ...
	I0422 18:24:18.249777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Start
	I0422 18:24:18.249966   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring networks are active...
	I0422 18:24:18.250666   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network default is active
	I0422 18:24:18.251036   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network mk-embed-certs-782377 is active
	I0422 18:24:18.251499   77634 main.go:141] libmachine: (embed-certs-782377) Getting domain xml...
	I0422 18:24:18.252150   77634 main.go:141] libmachine: (embed-certs-782377) Creating domain...
	I0422 18:24:18.225125   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:18.225168   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225565   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:24:18.225593   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225781   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:24:18.227460   77400 machine.go:97] duration metric: took 4m37.410379606s to provisionDockerMachine
	I0422 18:24:18.227495   77400 fix.go:56] duration metric: took 4m37.433636251s for fixHost
	I0422 18:24:18.227499   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 4m37.433656207s
	W0422 18:24:18.227517   77400 start.go:713] error starting host: provision: host is not running
	W0422 18:24:18.227584   77400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0422 18:24:18.227593   77400 start.go:728] Will try again in 5 seconds ...
	I0422 18:24:19.442937   77634 main.go:141] libmachine: (embed-certs-782377) Waiting to get IP...
	I0422 18:24:19.444048   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.444425   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.444484   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.444392   78906 retry.go:31] will retry after 283.008432ms: waiting for machine to come up
	I0422 18:24:19.729076   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.729457   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.729493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.729411   78906 retry.go:31] will retry after 252.047573ms: waiting for machine to come up
	I0422 18:24:19.983011   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.983417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.983442   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.983397   78906 retry.go:31] will retry after 300.528755ms: waiting for machine to come up
	I0422 18:24:20.286039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.286467   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.286500   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.286425   78906 retry.go:31] will retry after 426.555496ms: waiting for machine to come up
	I0422 18:24:20.715191   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.715601   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.715638   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.715525   78906 retry.go:31] will retry after 533.433633ms: waiting for machine to come up
	I0422 18:24:21.250151   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:21.250702   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:21.250732   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:21.250646   78906 retry.go:31] will retry after 854.033547ms: waiting for machine to come up
	I0422 18:24:22.106728   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.107083   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.107109   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.107036   78906 retry.go:31] will retry after 761.233698ms: waiting for machine to come up
	I0422 18:24:22.870007   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.870408   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.870435   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.870364   78906 retry.go:31] will retry after 1.121568589s: waiting for machine to come up
	I0422 18:24:23.229316   77400 start.go:360] acquireMachinesLock for no-preload-407991: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:24:23.993127   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:23.993600   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:23.993623   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:23.993535   78906 retry.go:31] will retry after 1.525222377s: waiting for machine to come up
	I0422 18:24:25.520203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:25.520584   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:25.520609   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:25.520557   78906 retry.go:31] will retry after 1.618927059s: waiting for machine to come up
	I0422 18:24:27.140862   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:27.141363   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:27.141391   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:27.141315   78906 retry.go:31] will retry after 1.828869827s: waiting for machine to come up
	I0422 18:24:28.972053   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:28.972472   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:28.972508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:28.972438   78906 retry.go:31] will retry after 2.456935091s: waiting for machine to come up
	I0422 18:24:31.430825   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:31.431208   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:31.431266   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:31.431181   78906 retry.go:31] will retry after 3.415431602s: waiting for machine to come up
	I0422 18:24:36.144008   77929 start.go:364] duration metric: took 4m11.537292071s to acquireMachinesLock for "default-k8s-diff-port-856422"
	I0422 18:24:36.144073   77929 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:36.144079   77929 fix.go:54] fixHost starting: 
	I0422 18:24:36.144413   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:36.144450   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:36.161253   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0422 18:24:36.161715   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:36.162147   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:24:36.162166   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:36.162536   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:36.162743   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:36.162914   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:24:36.164366   77929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-856422: state=Stopped err=<nil>
	I0422 18:24:36.164397   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	W0422 18:24:36.164563   77929 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:36.166915   77929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-856422" ...
	I0422 18:24:34.847819   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848316   77634 main.go:141] libmachine: (embed-certs-782377) Found IP for machine: 192.168.50.114
	I0422 18:24:34.848339   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has current primary IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848357   77634 main.go:141] libmachine: (embed-certs-782377) Reserving static IP address...
	I0422 18:24:34.848741   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.848769   77634 main.go:141] libmachine: (embed-certs-782377) DBG | skip adding static IP to network mk-embed-certs-782377 - found existing host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"}
	I0422 18:24:34.848782   77634 main.go:141] libmachine: (embed-certs-782377) Reserved static IP address: 192.168.50.114
	I0422 18:24:34.848801   77634 main.go:141] libmachine: (embed-certs-782377) Waiting for SSH to be available...
	I0422 18:24:34.848808   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Getting to WaitForSSH function...
	I0422 18:24:34.850829   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851167   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.851199   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851332   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH client type: external
	I0422 18:24:34.851352   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa (-rw-------)
	I0422 18:24:34.851383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:34.851402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | About to run SSH command:
	I0422 18:24:34.851417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | exit 0
	I0422 18:24:34.975383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:34.975812   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetConfigRaw
	I0422 18:24:34.976602   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:34.979578   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.979959   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.979992   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.980238   77634 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/config.json ...
	I0422 18:24:34.980472   77634 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:34.980497   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:34.980777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:34.983493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.983958   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.983999   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.984175   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:34.984372   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984710   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:34.984894   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:34.985074   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:34.985086   77634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:35.099838   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:35.099873   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100144   77634 buildroot.go:166] provisioning hostname "embed-certs-782377"
	I0422 18:24:35.100169   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100381   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.103203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103589   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.103618   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103754   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.103930   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104116   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104262   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.104446   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.104696   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.104720   77634 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-782377 && echo "embed-certs-782377" | sudo tee /etc/hostname
	I0422 18:24:35.223934   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-782377
	
	I0422 18:24:35.223962   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.227033   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227376   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.227413   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.227779   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.227976   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.228140   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.228334   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.228492   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.228508   77634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-782377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-782377/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-782377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:35.346513   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:35.346545   77634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:35.346561   77634 buildroot.go:174] setting up certificates
	I0422 18:24:35.346571   77634 provision.go:84] configureAuth start
	I0422 18:24:35.346598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.346898   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:35.349820   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350164   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.350192   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350301   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.352921   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353288   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.353314   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353488   77634 provision.go:143] copyHostCerts
	I0422 18:24:35.353543   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:35.353552   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:35.353619   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:35.353717   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:35.353725   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:35.353749   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:35.353801   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:35.353810   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:35.353831   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:35.353894   77634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.embed-certs-782377 san=[127.0.0.1 192.168.50.114 embed-certs-782377 localhost minikube]
	I0422 18:24:35.463676   77634 provision.go:177] copyRemoteCerts
	I0422 18:24:35.463733   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:35.463758   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.466567   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.467039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.467415   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.467605   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.467740   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.549947   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:35.576364   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:24:35.601539   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:35.625959   77634 provision.go:87] duration metric: took 279.37435ms to configureAuth
	I0422 18:24:35.625992   77634 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:35.626171   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:35.626235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.629095   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.629533   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629707   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.629934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630077   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630238   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.630365   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.630546   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.630563   77634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:35.906862   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:35.906892   77634 machine.go:97] duration metric: took 926.403466ms to provisionDockerMachine
	I0422 18:24:35.906905   77634 start.go:293] postStartSetup for "embed-certs-782377" (driver="kvm2")
	I0422 18:24:35.906916   77634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:35.906934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:35.907241   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:35.907277   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.910029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.910438   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910599   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.910782   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.910993   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.911168   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.994189   77634 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:35.998376   77634 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:35.998395   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:35.998468   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:35.998545   77634 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:35.998650   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:36.008268   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:36.034031   77634 start.go:296] duration metric: took 127.110389ms for postStartSetup
	I0422 18:24:36.034081   77634 fix.go:56] duration metric: took 17.806421597s for fixHost
	I0422 18:24:36.034100   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.036964   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037357   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.037380   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.037775   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038051   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.038403   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:36.038568   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:36.038579   77634 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:24:36.143878   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810276.108619822
	
	I0422 18:24:36.143903   77634 fix.go:216] guest clock: 1713810276.108619822
	I0422 18:24:36.143911   77634 fix.go:229] Guest: 2024-04-22 18:24:36.108619822 +0000 UTC Remote: 2024-04-22 18:24:36.034084746 +0000 UTC m=+287.715620683 (delta=74.535076ms)
	I0422 18:24:36.143936   77634 fix.go:200] guest clock delta is within tolerance: 74.535076ms
	I0422 18:24:36.143941   77634 start.go:83] releasing machines lock for "embed-certs-782377", held for 17.916313877s
	I0422 18:24:36.143966   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.144235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:36.146867   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147228   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.147257   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147431   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.147883   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148066   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148171   77634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:36.148218   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.148377   77634 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:36.148403   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.150838   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151150   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151176   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151268   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151296   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.151466   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.151628   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.151671   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151695   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151747   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.151880   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.152055   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.152209   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.152350   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.229109   77634 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:36.266621   77634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:36.421344   77634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:36.427814   77634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:36.427892   77634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:36.448157   77634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:36.448192   77634 start.go:494] detecting cgroup driver to use...
	I0422 18:24:36.448255   77634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:36.468930   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:36.485780   77634 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:36.485856   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:36.502182   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:36.521179   77634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:36.636244   77634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:36.783292   77634 docker.go:233] disabling docker service ...
	I0422 18:24:36.783366   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:36.803014   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:36.817938   77634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:36.957954   77634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:37.085750   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:37.101054   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:37.123504   77634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:37.123555   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.134422   77634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:37.134491   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.145961   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.157192   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.170117   77634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:37.188656   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.205792   77634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.225739   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
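The sed invocations above rewrite the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, force conmon into the pod cgroup, and open unprivileged port 0 via default_sysctls. A rough Go equivalent of the pause-image and cgroup-manager substitutions (file path and key names taken from the log; the regexes are an illustrative approximation of the sed patterns, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// pause_image = "registry.k8s.io/pause:3.9"
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// cgroup_manager = "cgroupfs"
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}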
	I0422 18:24:37.236719   77634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:37.246351   77634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:37.246401   77634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:37.261144   77634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:37.271464   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:37.395686   77634 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:37.534079   77634 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:37.534156   77634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:37.539212   77634 start.go:562] Will wait 60s for crictl version
	I0422 18:24:37.539285   77634 ssh_runner.go:195] Run: which crictl
	I0422 18:24:37.543239   77634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:37.581460   77634 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:37.581562   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.611743   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.645811   77634 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:37.647247   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:37.650321   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.650811   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:37.650841   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.651055   77634 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:37.655865   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:37.673617   77634 kubeadm.go:877] updating cluster {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:37.673732   77634 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:37.673785   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:37.718534   77634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:37.718609   77634 ssh_runner.go:195] Run: which lz4
	I0422 18:24:37.723369   77634 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:37.728270   77634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:37.728303   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:36.168344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Start
	I0422 18:24:36.168494   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring networks are active...
	I0422 18:24:36.169419   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network default is active
	I0422 18:24:36.169811   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network mk-default-k8s-diff-port-856422 is active
	I0422 18:24:36.170341   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Getting domain xml...
	I0422 18:24:36.171019   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Creating domain...
	I0422 18:24:37.407148   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting to get IP...
	I0422 18:24:37.408083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408430   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408509   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.408416   79040 retry.go:31] will retry after 267.855158ms: waiting for machine to come up
	I0422 18:24:37.677765   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678134   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678168   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.678084   79040 retry.go:31] will retry after 267.61504ms: waiting for machine to come up
	I0422 18:24:37.947737   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948250   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.948216   79040 retry.go:31] will retry after 351.088664ms: waiting for machine to come up
	I0422 18:24:38.300548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301057   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301090   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.301011   79040 retry.go:31] will retry after 560.164848ms: waiting for machine to come up
	I0422 18:24:38.862557   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863114   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.863075   79040 retry.go:31] will retry after 590.286684ms: waiting for machine to come up
	I0422 18:24:39.454925   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455483   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455510   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:39.455428   79040 retry.go:31] will retry after 870.474888ms: waiting for machine to come up
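While the embed-certs node is being provisioned, the default-k8s-diff-port machine is still waiting for a DHCP lease; each failed lookup schedules a retry with a growing, jittered delay. A minimal sketch of that retry-with-backoff pattern (function names and constants here are illustrative, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the DHCP-lease lookup; it always fails here so the
// backoff behaviour is visible.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 10; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the delay, similar to the ~270ms, ~350ms, ~560ms, ...
		// intervals visible in the log above.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}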
	I0422 18:24:39.338447   77634 crio.go:462] duration metric: took 1.615205556s to copy over tarball
	I0422 18:24:39.338545   77634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:41.640474   77634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301883484s)
	I0422 18:24:41.640514   77634 crio.go:469] duration metric: took 2.302038123s to extract the tarball
	I0422 18:24:41.640524   77634 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:24:41.680325   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:41.724755   77634 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:24:41.724777   77634 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:24:41.724785   77634 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.30.0 crio true true} ...
	I0422 18:24:41.724887   77634 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-782377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:24:41.724964   77634 ssh_runner.go:195] Run: crio config
	I0422 18:24:41.772680   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:41.772704   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:41.772715   77634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:24:41.772733   77634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-782377 NodeName:embed-certs-782377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:24:41.772898   77634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-782377"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
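The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a file before feeding it to kubeadm is to decode each document and print its apiVersion/kind; this sketch uses gopkg.in/yaml.v3 and a hypothetical local copy of the config:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the generated config
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "invalid document:", err)
			return
		}
		fmt.Printf("parsed %v/%v\n", doc["apiVersion"], doc["kind"])
	}
}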
	I0422 18:24:41.772964   77634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:24:41.783492   77634 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:24:41.783575   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:24:41.793500   77634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0422 18:24:41.810415   77634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:24:41.827504   77634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0422 18:24:41.845704   77634 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0422 18:24:41.849728   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:41.862798   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:41.998260   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:24:42.018779   77634 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377 for IP: 192.168.50.114
	I0422 18:24:42.018801   77634 certs.go:194] generating shared ca certs ...
	I0422 18:24:42.018820   77634 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:24:42.018977   77634 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:24:42.019034   77634 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:24:42.019048   77634 certs.go:256] generating profile certs ...
	I0422 18:24:42.019146   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/client.key
	I0422 18:24:42.019218   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key.d804c20e
	I0422 18:24:42.019298   77634 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key
	I0422 18:24:42.019455   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:24:42.019493   77634 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:24:42.019509   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:24:42.019539   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:24:42.019571   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:24:42.019606   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:24:42.019665   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:42.020460   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:24:42.065297   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:24:42.098581   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:24:42.139751   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:24:42.169770   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0422 18:24:42.199958   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:24:42.229298   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:24:42.254517   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:24:42.279390   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:24:42.303872   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:24:42.329704   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:24:42.355108   77634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:24:42.372684   77634 ssh_runner.go:195] Run: openssl version
	I0422 18:24:42.378631   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:24:42.389709   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394492   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394552   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.400346   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:24:42.411335   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:24:42.422568   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427213   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427278   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.433277   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:24:42.444618   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:24:42.455793   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460681   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460739   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.466785   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:24:42.485401   77634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:24:42.491205   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:24:42.498635   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:24:42.510577   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:24:42.517596   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:24:42.524413   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:24:42.530872   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
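Each "openssl x509 -noout -in ... -checkend 86400" above asks whether a certificate will still be valid 24 hours from now. The same check in Go with crypto/x509 (a minimal sketch; the path is one of the certs listed in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data in", path)
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}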
	I0422 18:24:42.537199   77634 kubeadm.go:391] StartCluster: {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:24:42.537319   77634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:24:42.537379   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.579863   77634 cri.go:89] found id: ""
	I0422 18:24:42.579944   77634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:24:42.590756   77634 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:24:42.590781   77634 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:24:42.590788   77634 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:24:42.590844   77634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:24:42.601517   77634 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:24:42.603120   77634 kubeconfig.go:125] found "embed-certs-782377" server: "https://192.168.50.114:8443"
	I0422 18:24:42.606189   77634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:24:42.616881   77634 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0422 18:24:42.616911   77634 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:24:42.616922   77634 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:24:42.616970   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.656829   77634 cri.go:89] found id: ""
	I0422 18:24:42.656923   77634 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:24:42.675575   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:24:42.686408   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:24:42.686431   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:24:42.686484   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:24:42.697303   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:24:42.697391   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:24:42.707693   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:24:42.717836   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:24:42.717932   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:24:42.729952   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.740902   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:24:42.740980   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.751946   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:24:42.761758   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:24:42.761830   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:24:42.772699   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:24:42.783018   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:42.891737   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:40.327325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327782   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:40.327726   79040 retry.go:31] will retry after 926.321969ms: waiting for machine to come up
	I0422 18:24:41.255601   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256117   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:41.256072   79040 retry.go:31] will retry after 928.33371ms: waiting for machine to come up
	I0422 18:24:42.186290   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186798   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186826   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:42.186762   79040 retry.go:31] will retry after 1.708117553s: waiting for machine to come up
	I0422 18:24:43.896236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:43.896597   79040 retry.go:31] will retry after 1.720003793s: waiting for machine to come up
	I0422 18:24:44.055395   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.163622709s)
	I0422 18:24:44.055429   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.278840   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.351743   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.460115   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:24:44.460202   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:44.960631   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.460588   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.478048   77634 api_server.go:72] duration metric: took 1.017932232s to wait for apiserver process to appear ...
	I0422 18:24:45.478082   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:24:45.478104   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:45.478702   77634 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0422 18:24:45.978527   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.247298   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:24:48.247334   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:24:48.247351   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.295953   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.296005   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.478899   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.488884   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.488920   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.978472   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.992521   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.992552   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:49.479179   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:49.485588   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:24:49.493015   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:24:49.493055   77634 api_server.go:131] duration metric: took 4.01496465s to wait for apiserver health ...
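The healthz loop above polls https://192.168.50.114:8443/healthz roughly every 500ms, tolerating "connection refused", 403 and 500 responses until the endpoint finally answers 200 "ok". A compact sketch of that polling loop (TLS verification is skipped here purely for illustration, since the probe only cares about the status code; this is not minikube's implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is not in the local trust store; the probe only
			// needs the HTTP status, so certificate verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.50.114:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}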
	I0422 18:24:49.493065   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:49.493074   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:49.494997   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:24:45.618240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618714   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618744   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:45.618673   79040 retry.go:31] will retry after 2.396679945s: waiting for machine to come up
	I0422 18:24:48.016812   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017231   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017258   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:48.017197   79040 retry.go:31] will retry after 2.304959564s: waiting for machine to come up
	I0422 18:24:49.496476   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:24:49.516525   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:24:49.541103   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:24:49.552224   77634 system_pods.go:59] 8 kube-system pods found
	I0422 18:24:49.552263   77634 system_pods.go:61] "coredns-7db6d8ff4d-lxcv2" [137ad3db-8bc5-4b7f-8eb0-12a278eba41c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:24:49.552273   77634 system_pods.go:61] "etcd-embed-certs-782377" [85322e31-1ad6-4239-8086-f2a465a28d8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:24:49.552287   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [e791d7d4-a94d-4cce-a50d-4e569350f210] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:24:49.552307   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [cbcc2e7f-7b3a-435b-97d5-5b69b7e399c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:24:49.552317   77634 system_pods.go:61] "kube-proxy-r4249" [7ffb3b8f-53d8-45df-8426-74f0ffb0d20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 18:24:49.552327   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [9568040b-3eca-403e-b078-d6f2071e70c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:24:49.552335   77634 system_pods.go:61] "metrics-server-569cc877fc-d8s5p" [3bcda1df-02f7-4405-95c7-4d8559a0138c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:24:49.552342   77634 system_pods.go:61] "storage-provisioner" [c196d779-346a-4e3f-b1c3-dde4292df017] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:24:49.552351   77634 system_pods.go:74] duration metric: took 11.221599ms to wait for pod list to return data ...
	I0422 18:24:49.552373   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:24:49.556086   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:24:49.556130   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:24:49.556142   77634 node_conditions.go:105] duration metric: took 3.764067ms to run NodePressure ...
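The "waiting for kube-system pods to appear" step is an ordinary pod list against the freshly restarted apiserver. A hedged client-go sketch that reproduces it, listing kube-system pods and reporting which are Ready (the kubeconfig path is a hypothetical local stand-in; minikube's own code differs):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; in this test it would be the embed-certs-782377 kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("  %s ready=%v phase=%s\n", p.Name, ready, p.Status.Phase)
	}
}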
	I0422 18:24:49.556161   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:49.852023   77634 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856866   77634 kubeadm.go:733] kubelet initialised
	I0422 18:24:49.856894   77634 kubeadm.go:734] duration metric: took 4.83996ms waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856904   77634 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:24:49.863808   77634 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.868817   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868840   77634 pod_ready.go:81] duration metric: took 5.001181ms for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.868849   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868855   77634 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.873591   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873612   77634 pod_ready.go:81] duration metric: took 4.750292ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.873621   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873627   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.878471   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878494   77634 pod_ready.go:81] duration metric: took 4.859998ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.878503   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878510   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.945869   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945909   77634 pod_ready.go:81] duration metric: took 67.385628ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.945923   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945932   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345633   77634 pod_ready.go:92] pod "kube-proxy-r4249" in "kube-system" namespace has status "Ready":"True"
	I0422 18:24:50.345655   77634 pod_ready.go:81] duration metric: took 399.713725ms for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345666   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:52.352988   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:50.324396   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324920   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324953   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:50.324894   79040 retry.go:31] will retry after 4.018790507s: waiting for machine to come up
	I0422 18:24:54.347584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348046   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Found IP for machine: 192.168.61.206
	I0422 18:24:54.348081   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has current primary IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348094   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserving static IP address...
	I0422 18:24:54.348535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserved static IP address: 192.168.61.206
	I0422 18:24:54.348560   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for SSH to be available...
	I0422 18:24:54.348584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.348624   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | skip adding static IP to network mk-default-k8s-diff-port-856422 - found existing host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"}
	I0422 18:24:54.348640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Getting to WaitForSSH function...
	I0422 18:24:54.351069   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351570   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.351608   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH client type: external
	I0422 18:24:54.351758   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa (-rw-------)
	I0422 18:24:54.351793   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:54.351810   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | About to run SSH command:
	I0422 18:24:54.351834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | exit 0
	I0422 18:24:54.479277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:54.479674   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetConfigRaw
	I0422 18:24:54.480350   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.483089   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.483498   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483801   77929 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/config.json ...
	I0422 18:24:54.484031   77929 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:54.484051   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:54.484272   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.486449   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.486857   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486992   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.487178   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487470   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.487635   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.487825   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.487838   77929 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:55.812288   78377 start.go:364] duration metric: took 3m11.050220887s to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:24:55.812348   78377 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:55.812359   78377 fix.go:54] fixHost starting: 
	I0422 18:24:55.812769   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:55.812806   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:55.830114   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0422 18:24:55.830528   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:55.831130   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:24:55.831155   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:55.831459   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:55.831688   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:24:55.831855   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:24:55.833322   78377 fix.go:112] recreateIfNeeded on old-k8s-version-367072: state=Stopped err=<nil>
	I0422 18:24:55.833351   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	W0422 18:24:55.833481   78377 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:55.835517   78377 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	I0422 18:24:54.603732   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:54.603759   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.603993   77929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-856422"
	I0422 18:24:54.604017   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.604280   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.606938   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607302   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.607331   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607524   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.607693   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.607856   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.608002   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.608174   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.608381   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.608398   77929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-856422 && echo "default-k8s-diff-port-856422" | sudo tee /etc/hostname
	I0422 18:24:54.734622   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-856422
	
	I0422 18:24:54.734646   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.737804   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738109   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.738141   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.738495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738773   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.738950   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.739157   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.739176   77929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-856422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-856422/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-856422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:54.864646   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:54.864679   77929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:54.864732   77929 buildroot.go:174] setting up certificates
	I0422 18:24:54.864745   77929 provision.go:84] configureAuth start
	I0422 18:24:54.864764   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.865059   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.868205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868626   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.868666   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868868   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.871736   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872118   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.872147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872275   77929 provision.go:143] copyHostCerts
	I0422 18:24:54.872340   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:54.872353   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:54.872424   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:54.872545   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:54.872557   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:54.872598   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:54.872676   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:54.872688   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:54.872718   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:54.872794   77929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-856422 san=[127.0.0.1 192.168.61.206 default-k8s-diff-port-856422 localhost minikube]
	I0422 18:24:55.091765   77929 provision.go:177] copyRemoteCerts
	I0422 18:24:55.091820   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:55.091848   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.094572   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.094939   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.094970   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.095209   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.095501   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.095767   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.095958   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.192243   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:55.223313   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0422 18:24:55.250149   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:55.279442   77929 provision.go:87] duration metric: took 414.679508ms to configureAuth
	I0422 18:24:55.279474   77929 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:55.280056   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:55.280125   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.282806   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.283237   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283405   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.283636   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283803   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283941   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.284109   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.284276   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.284294   77929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:55.565199   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:55.565225   77929 machine.go:97] duration metric: took 1.081180365s to provisionDockerMachine
	I0422 18:24:55.565239   77929 start.go:293] postStartSetup for "default-k8s-diff-port-856422" (driver="kvm2")
	I0422 18:24:55.565282   77929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:55.565312   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.565649   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:55.565682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.568211   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.568614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568809   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.568994   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.569182   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.569352   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.654461   77929 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:55.658992   77929 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:55.659016   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:55.659091   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:55.659199   77929 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:55.659309   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:55.669183   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:55.694953   77929 start.go:296] duration metric: took 129.698973ms for postStartSetup
	I0422 18:24:55.694998   77929 fix.go:56] duration metric: took 19.550918724s for fixHost
	I0422 18:24:55.695021   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.697596   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.697926   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.697958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.698133   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.698325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698479   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698579   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.698680   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.698897   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.698914   77929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:24:55.812106   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810295.778892948
	
	I0422 18:24:55.812132   77929 fix.go:216] guest clock: 1713810295.778892948
	I0422 18:24:55.812143   77929 fix.go:229] Guest: 2024-04-22 18:24:55.778892948 +0000 UTC Remote: 2024-04-22 18:24:55.69500303 +0000 UTC m=+271.245786903 (delta=83.889918ms)
	I0422 18:24:55.812168   77929 fix.go:200] guest clock delta is within tolerance: 83.889918ms
	I0422 18:24:55.812176   77929 start.go:83] releasing machines lock for "default-k8s-diff-port-856422", held for 19.668119564s
	I0422 18:24:55.812213   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.812500   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:55.815404   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.815786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.815828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.816036   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816526   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816698   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816781   77929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:55.816823   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.817092   77929 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:55.817116   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.819495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819710   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819931   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.819958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820045   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.820181   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820217   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820362   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820366   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820631   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.820716   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820845   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.904810   77929 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:55.937093   77929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:56.089389   77929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:56.096144   77929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:56.096208   77929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:56.118194   77929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:56.118224   77929 start.go:494] detecting cgroup driver to use...
	I0422 18:24:56.118292   77929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:56.134918   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:56.154107   77929 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:56.154180   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:56.168971   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:56.188793   77929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:56.310223   77929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:56.492316   77929 docker.go:233] disabling docker service ...
	I0422 18:24:56.492430   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:56.515169   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:56.529734   77929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:56.670628   77929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:56.810823   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:56.826785   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:56.847682   77929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:56.847741   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.860499   77929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:56.860576   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.872086   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.883347   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.901596   77929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:56.916912   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.928121   77929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.947335   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.958431   77929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:56.968077   77929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:56.968131   77929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:56.982135   77929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:56.991801   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:57.125635   77929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:57.263889   77929 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:57.263973   77929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:57.269573   77929 start.go:562] Will wait 60s for crictl version
	I0422 18:24:57.269627   77929 ssh_runner.go:195] Run: which crictl
	I0422 18:24:57.273613   77929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:57.314357   77929 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:57.314463   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.345062   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.380868   77929 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:54.353338   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:56.853757   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:57.382284   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:57.385215   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:57.385655   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385889   77929 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:57.390482   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:57.405644   77929 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:57.405766   77929 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:57.405868   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:57.452528   77929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:57.452604   77929 ssh_runner.go:195] Run: which lz4
	I0422 18:24:57.456903   77929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:57.461373   77929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:57.461411   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:59.060426   77929 crio.go:462] duration metric: took 1.603560712s to copy over tarball
	I0422 18:24:59.060532   77929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:55.836947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .Start
	I0422 18:24:55.837156   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:24:55.837991   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:24:55.838340   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:24:55.838802   78377 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:24:55.839484   78377 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:24:57.114447   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:24:57.115418   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.115808   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.115885   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.115780   79197 retry.go:31] will retry after 292.692957ms: waiting for machine to come up
	I0422 18:24:57.410220   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.410760   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.410793   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.410707   79197 retry.go:31] will retry after 381.746596ms: waiting for machine to come up
	I0422 18:24:57.794121   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.794537   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.794561   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.794500   79197 retry.go:31] will retry after 343.501318ms: waiting for machine to come up
	I0422 18:24:58.140203   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.140843   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.140872   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.140795   79197 retry.go:31] will retry after 497.222481ms: waiting for machine to come up
	I0422 18:24:58.639611   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.640103   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.640133   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.640061   79197 retry.go:31] will retry after 578.746837ms: waiting for machine to come up
	I0422 18:24:59.220771   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.221312   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.221342   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.221264   79197 retry.go:31] will retry after 773.821721ms: waiting for machine to come up
	I0422 18:24:58.854112   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:00.856147   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:01.563849   77929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503290941s)
	I0422 18:25:01.563881   77929 crio.go:469] duration metric: took 2.503413712s to extract the tarball
	I0422 18:25:01.563891   77929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:01.603330   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:01.649885   77929 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:25:01.649909   77929 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:25:01.649916   77929 kubeadm.go:928] updating node { 192.168.61.206 8444 v1.30.0 crio true true} ...
	I0422 18:25:01.650053   77929 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-856422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:01.650143   77929 ssh_runner.go:195] Run: crio config
	I0422 18:25:01.698892   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:01.698915   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:01.698929   77929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:01.698948   77929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.206 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-856422 NodeName:default-k8s-diff-port-856422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:01.699075   77929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.206
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-856422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
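The block above is the full kubeadm/kubelet/kube-proxy configuration that minikube generated for default-k8s-diff-port-856422. As a rough illustration of how such a document can be produced, here is a minimal Go sketch that renders a kubeadm-style ClusterConfiguration from a struct with text/template; the struct fields and template body are illustrative assumptions, not minikube's actual generator.

// Minimal sketch (not minikube's real template code): render a kubeadm-style
// ClusterConfiguration from a Go struct. Field names and template body are
// illustrative assumptions based on the config printed in the log above.
package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	BindPort          int
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		BindPort:          8444,
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.30.0",
	}
	// template.Must panics on a parse error, which is acceptable for a constant template.
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	_ = t.Execute(os.Stdout, p) // writes the rendered YAML to stdout
}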
	I0422 18:25:01.699150   77929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:01.709830   77929 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:01.709903   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:01.720447   77929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0422 18:25:01.738745   77929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:01.756420   77929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0422 18:25:01.775364   77929 ssh_runner.go:195] Run: grep 192.168.61.206	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:01.779476   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
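The command above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: it drops any previous line for that name, appends the current IP, and copies the result back with sudo. A minimal Go sketch of the same read-filter-append pattern, operating on a local file with illustrative paths:

// Sketch of the idempotent hosts-entry update performed by the command above:
// drop any existing line for the host, then append the current IP mapping.
// Works on a local copy; the real flow runs remotely through sudo.
package main

import (
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Skip stale entries for this hostname and empty trailing lines.
		if strings.HasSuffix(line, "\t"+host) || line == "" {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Illustrative path and values; /etc/hosts on the node is what the test really edits.
	_ = ensureHostsEntry("hosts.local", "192.168.61.206", "control-plane.minikube.internal")
}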
	I0422 18:25:01.792860   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:01.920607   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:01.939637   77929 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422 for IP: 192.168.61.206
	I0422 18:25:01.939658   77929 certs.go:194] generating shared ca certs ...
	I0422 18:25:01.939675   77929 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:01.939858   77929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:01.939911   77929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:01.939922   77929 certs.go:256] generating profile certs ...
	I0422 18:25:01.940026   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/client.key
	I0422 18:25:01.940105   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key.e8400874
	I0422 18:25:01.940170   77929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key
	I0422 18:25:01.940320   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:01.940386   77929 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:01.940400   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:01.940437   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:01.940474   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:01.940506   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:01.940603   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:01.941408   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:01.981392   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:02.020335   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:02.057221   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:02.088571   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 18:25:02.123716   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:02.153926   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:02.183499   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:02.212438   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:02.238650   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:02.265786   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:02.295001   77929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:02.315343   77929 ssh_runner.go:195] Run: openssl version
	I0422 18:25:02.322001   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:02.334785   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340619   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340686   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.348942   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:02.364960   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:02.381460   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386720   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386794   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.392894   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:02.404951   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:02.417334   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423503   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423573   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.430512   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:02.444132   77929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:02.449749   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:02.456667   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:02.463700   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:02.470474   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:02.477324   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:02.483900   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
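Each of the openssl x509 -checkend 86400 runs above asks whether a certificate will still be valid 24 hours from now; a non-zero exit would mean it expires inside that window and needs regeneration. A small in-process equivalent in Go (the file path is illustrative):

// Sketch of the `openssl x509 -checkend 86400` check: report whether a PEM
// certificate expires within the next 24 hours. The path is illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon) // true would trigger certificate regeneration
}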
	I0422 18:25:02.490614   77929 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:02.490719   77929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:02.490768   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.538766   77929 cri.go:89] found id: ""
	I0422 18:25:02.538849   77929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:02.549686   77929 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:02.549711   77929 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:02.549717   77929 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:02.549794   77929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:02.560594   77929 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:02.561584   77929 kubeconfig.go:125] found "default-k8s-diff-port-856422" server: "https://192.168.61.206:8444"
	I0422 18:25:02.563656   77929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:02.575462   77929 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.206
	I0422 18:25:02.575507   77929 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:02.575522   77929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:02.575606   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.628012   77929 cri.go:89] found id: ""
	I0422 18:25:02.628080   77929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:02.645405   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:02.656723   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:02.656751   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:02.656814   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:25:02.667202   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:02.667269   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:02.678303   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:25:02.688600   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:02.688690   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:02.699963   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.710329   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:02.710393   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.721188   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:25:02.731964   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:02.732040   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:02.743541   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:02.755030   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:02.870301   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:03.995375   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125032803s)
	I0422 18:25:03.995447   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.230252   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.302979   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.395038   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:04.395115   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:59.996437   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.996984   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.997018   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.996926   79197 retry.go:31] will retry after 1.191182438s: waiting for machine to come up
	I0422 18:25:01.190382   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:01.190954   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:01.190990   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:01.190917   79197 retry.go:31] will retry after 1.312288818s: waiting for machine to come up
	I0422 18:25:02.504320   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:02.504783   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:02.504807   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:02.504744   79197 retry.go:31] will retry after 1.553447941s: waiting for machine to come up
	I0422 18:25:04.060300   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:04.060822   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:04.060855   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:04.060778   79197 retry.go:31] will retry after 1.790234912s: waiting for machine to come up
	I0422 18:25:03.502023   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.353882   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:04.353905   77634 pod_ready.go:81] duration metric: took 14.00823208s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:04.353915   77634 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:06.363356   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:08.363954   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
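The pod_ready lines poll each pod until its Ready condition reports True; metrics-server-569cc877fc-d8s5p stays False here because its container has not yet passed its readiness checks. A minimal client-go sketch of the same check, assuming an illustrative kubeconfig path:

// Sketch of the "pod Ready" check behind the pod_ready log lines: fetch the
// pod and inspect its Ready condition. The kubeconfig path, namespace, and
// pod name are illustrative.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-scheduler-embed-certs-782377", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod)) // the test keeps retrying until this is true
}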
	I0422 18:25:04.896176   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.396048   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.440071   77929 api_server.go:72] duration metric: took 1.045032787s to wait for apiserver process to appear ...
	I0422 18:25:05.440103   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:25:05.440148   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.759542   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.759577   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.759592   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.793255   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.793294   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.940652   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.945611   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:08.945646   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:09.440292   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.464743   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.464770   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:05.852898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:05.853386   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:05.853413   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:05.853350   79197 retry.go:31] will retry after 2.265221688s: waiting for machine to come up
	I0422 18:25:08.121376   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:08.121797   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:08.121835   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:08.121747   79197 retry.go:31] will retry after 3.098868652s: waiting for machine to come up
	I0422 18:25:09.940470   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.946872   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.946900   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:10.441291   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:10.445834   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:25:10.452788   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:25:10.452814   77929 api_server.go:131] duration metric: took 5.012704724s to wait for apiserver health ...
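The healthz wait above tolerates the early 403s (RBAC is not bootstrapped yet, so system:anonymous is rejected) and 500s (post-start hooks still failing), and only finishes once the endpoint returns 200. A hedged sketch of that polling loop; the URL comes from the log, while the TLS-skipping client and the 500ms cadence are assumptions made to keep the example self-contained:

// Sketch of the apiserver healthz wait: poll the endpoint until it returns
// 200, treating 403/500 responses as "not ready yet". TLS verification is
// skipped here only so the example runs without the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // apiserver reports healthy
			}
			// 403 before RBAC bootstrap and 500 while post-start hooks run are expected.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.206:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}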
	I0422 18:25:10.452823   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:10.452828   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:10.454695   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:25:10.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:13.361234   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:10.456234   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:25:10.469460   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:25:10.510297   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:25:10.527988   77929 system_pods.go:59] 8 kube-system pods found
	I0422 18:25:10.528034   77929 system_pods.go:61] "coredns-7db6d8ff4d-w968m" [1372c3d4-cb23-4f33-911b-57876688fcd4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:25:10.528044   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [af6c3f45-494d-469b-95e0-3d0842d07a70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:25:10.528051   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [665925b4-3073-41c2-86c0-12186f079459] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:25:10.528057   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [e8661b67-89c5-43a6-b66e-828f637942e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:25:10.528061   77929 system_pods.go:61] "kube-proxy-4xvx2" [0e662ebe-1f6f-48fe-86c7-595b0bfa4bb6] Running
	I0422 18:25:10.528066   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [e6101593-2ee5-4765-b129-33b3ed7d4c98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:25:10.528075   77929 system_pods.go:61] "metrics-server-569cc877fc-l5qqw" [85eab808-f1f0-4fbc-9c54-1ae307226243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:25:10.528079   77929 system_pods.go:61] "storage-provisioner" [ba8465de-babc-4496-809f-68f6ec917ce8] Running
	I0422 18:25:10.528095   77929 system_pods.go:74] duration metric: took 17.768241ms to wait for pod list to return data ...
	I0422 18:25:10.528104   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:25:10.539169   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:25:10.539202   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:25:10.539214   77929 node_conditions.go:105] duration metric: took 11.105847ms to run NodePressure ...
	I0422 18:25:10.539237   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:10.808687   77929 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:25:10.815993   77929 kubeadm.go:733] kubelet initialised
	I0422 18:25:10.816025   77929 kubeadm.go:734] duration metric: took 7.302574ms waiting for restarted kubelet to initialise ...
	I0422 18:25:10.816037   77929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:25:10.824257   77929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:12.837255   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:11.221887   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:11.222319   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:11.222358   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:11.222277   79197 retry.go:31] will retry after 4.068460973s: waiting for machine to come up
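The repeated retry.go "will retry after ..." lines show the machine wait backing off with growing, jittered delays until the VM obtains an IP address. A small sketch of that style of jittered exponential backoff; the probe function and tuning constants are illustrative, not minikube's actual retry package:

// Sketch of a jittered exponential backoff like the "will retry after ..."
// waits in the log. The probe function and constants are illustrative.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, probe func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := probe(); err == nil {
			return nil
		}
		// Add up to 50% random jitter so concurrent waiters do not sync up.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return errors.New("machine did not come up")
}

func main() {
	tries := 0
	_ = retryWithBackoff(8, 500*time.Millisecond, func() error {
		tries++
		if tries < 5 {
			return errors.New("no IP yet") // pretend the VM is still booting
		}
		return nil
	})
}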
	I0422 18:25:16.704684   77400 start.go:364] duration metric: took 53.475319353s to acquireMachinesLock for "no-preload-407991"
	I0422 18:25:16.704741   77400 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:25:16.704752   77400 fix.go:54] fixHost starting: 
	I0422 18:25:16.705132   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:25:16.705166   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:25:16.721711   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0422 18:25:16.722127   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:25:16.722671   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:25:16.722693   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:25:16.723022   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:25:16.723220   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:16.723426   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:25:16.725197   77400 fix.go:112] recreateIfNeeded on no-preload-407991: state=Stopped err=<nil>
	I0422 18:25:16.725231   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	W0422 18:25:16.725430   77400 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:25:16.727275   77400 out.go:177] * Restarting existing kvm2 VM for "no-preload-407991" ...
	I0422 18:25:15.295463   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296039   78377 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:25:15.296072   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296081   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:25:15.296472   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.296493   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
	I0422 18:25:15.296508   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | skip adding static IP to network mk-old-k8s-version-367072 - found existing host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"}
	I0422 18:25:15.296524   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:25:15.296537   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:25:15.299164   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299527   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.299562   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299661   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:25:15.299692   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:25:15.299731   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:15.299745   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:25:15.299762   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:25:15.431323   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:15.431669   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:25:15.432328   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.434829   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435261   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.435293   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435554   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:25:15.435765   78377 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:15.435786   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:15.436017   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.438390   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438750   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.438784   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438910   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.439095   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439314   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.439666   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.439849   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.439861   78377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:15.555657   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:15.555686   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.555931   78377 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:25:15.555962   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.556169   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.558789   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559254   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.559292   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559331   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.559492   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559641   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559748   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.559877   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.560055   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.560077   78377 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:25:15.690454   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:25:15.690486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.693309   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693654   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.693690   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693952   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.694172   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694390   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694546   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.694732   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.694940   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.694960   78377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:15.821039   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:15.821068   78377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:15.821096   78377 buildroot.go:174] setting up certificates
	I0422 18:25:15.821105   78377 provision.go:84] configureAuth start
	I0422 18:25:15.821113   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.821339   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.824209   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824673   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.824710   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824884   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.827439   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827725   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.827752   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827907   78377 provision.go:143] copyHostCerts
	I0422 18:25:15.827974   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:15.827987   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:15.828059   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:15.828170   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:15.828181   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:15.828209   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:15.828281   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:15.828291   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:15.828317   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:15.828411   78377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
	I0422 18:25:15.967003   78377 provision.go:177] copyRemoteCerts
	I0422 18:25:15.967056   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:15.967082   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.969759   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970152   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.970189   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970419   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.970600   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.970750   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.970903   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.058600   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:16.088368   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:25:16.119116   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:16.145380   78377 provision.go:87] duration metric: took 324.262342ms to configureAuth
	I0422 18:25:16.145416   78377 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:16.145651   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:25:16.145736   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.148776   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149221   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.149251   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149449   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.149624   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149789   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.150116   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.150295   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.150313   78377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:16.448112   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:16.448141   78377 machine.go:97] duration metric: took 1.012360153s to provisionDockerMachine
	I0422 18:25:16.448154   78377 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:25:16.448166   78377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:16.448188   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.448508   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:16.448541   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.451479   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.451874   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.451898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.452170   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.452373   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.452576   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.452773   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.543300   78377 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:16.549385   78377 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:16.549409   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:16.549473   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:16.549590   78377 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:16.549727   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:16.560863   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:16.585861   78377 start.go:296] duration metric: took 137.693932ms for postStartSetup
	I0422 18:25:16.585911   78377 fix.go:56] duration metric: took 20.77354305s for fixHost
	I0422 18:25:16.585931   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.588815   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589234   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.589263   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589495   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.589713   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.589877   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.590039   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.590245   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.590396   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.590406   78377 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:16.704537   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810316.682617297
	
	I0422 18:25:16.704559   78377 fix.go:216] guest clock: 1713810316.682617297
	I0422 18:25:16.704569   78377 fix.go:229] Guest: 2024-04-22 18:25:16.682617297 +0000 UTC Remote: 2024-04-22 18:25:16.585915688 +0000 UTC m=+211.981005523 (delta=96.701609ms)
	I0422 18:25:16.704592   78377 fix.go:200] guest clock delta is within tolerance: 96.701609ms
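A note on the clock check above: the SSH command rendered as date +%!s(MISSING).%!N(MISSING) is the log's garbled printf of what appears to be date +%s.%N (seconds.nanoseconds), and fix.go compares that guest reading against the host's own clock. A rough host-side equivalent, reusing the SSH key path and user that appear elsewhere in this log (illustrative sketch only):
	KEY=/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa
	GUEST=$(ssh -i "$KEY" docker@192.168.72.149 'date +%s.%N')   # guest clock
	HOST=$(date +%s.%N)                                          # host clock
	# minikube only adjusts the guest clock when this delta exceeds its tolerance
	awk -v h="$HOST" -v g="$GUEST" 'BEGIN{printf "delta: %.6fs\n", h-g}'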
	I0422 18:25:16.704600   78377 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 20.892277591s
	I0422 18:25:16.704631   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.704920   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:16.707837   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708205   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.708230   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708427   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.708994   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709163   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709240   78377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:16.709279   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.709342   78377 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:16.709364   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.712025   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712216   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712450   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712498   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712566   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.712674   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712720   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712722   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.712857   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.712945   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.713038   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.713101   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.713240   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.713370   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.804499   78377 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:16.836596   78377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:16.993049   78377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:17.000275   78377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:17.000346   78377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:17.023327   78377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:17.023351   78377 start.go:494] detecting cgroup driver to use...
	I0422 18:25:17.023425   78377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:17.045320   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:17.061622   78377 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:17.061692   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:17.078768   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:17.094562   78377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:17.221702   78377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:17.390374   78377 docker.go:233] disabling docker service ...
	I0422 18:25:17.390449   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:17.409352   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:17.425491   78377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:17.582359   78377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:17.735691   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:17.752812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:17.777437   78377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:25:17.777495   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.789378   78377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:17.789441   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.801159   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.813702   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.825938   78377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
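The sed commands just above point cri-o at minikube's pause image and switch it to the cgroupfs driver, with conmon placed in the "pod" cgroup. A quick way to confirm the rewritten keys on the node, assuming the same drop-in location used in this log:
	# Hypothetical verification sketch; expected values come from the sed commands above.
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"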
	I0422 18:25:17.841552   78377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:17.852365   78377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:17.852455   78377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:17.870233   78377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:17.882139   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:18.021505   78377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:18.179583   78377 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:18.179677   78377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:18.185047   78377 start.go:562] Will wait 60s for crictl version
	I0422 18:25:18.185105   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:18.189079   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:18.227533   78377 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
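crictl picks up its runtime endpoint from the /etc/crictl.yaml written earlier, so the version probe above is answered by cri-o on /var/run/crio/crio.sock. The same query can be made explicit (a sketch, not a command from the log itself):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# expected: RuntimeName cri-o, RuntimeVersion 1.29.1, matching the output above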
	I0422 18:25:18.227643   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.260147   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.297011   78377 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0422 18:25:15.362667   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:17.861622   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:14.831683   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:14.831706   77929 pod_ready.go:81] duration metric: took 4.007420508s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:14.831715   77929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343025   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:16.343056   77929 pod_ready.go:81] duration metric: took 1.511333532s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343070   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351244   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:17.351267   77929 pod_ready.go:81] duration metric: took 1.008189798s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351280   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:19.365025   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:18.298407   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:18.301613   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302026   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:18.302057   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302317   78377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:18.307249   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:18.321575   78377 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:18.321721   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:25:18.321767   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:18.382066   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:18.382133   78377 ssh_runner.go:195] Run: which lz4
	I0422 18:25:18.387080   78377 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:25:18.392576   78377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:25:18.392613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:25:16.728745   77400 main.go:141] libmachine: (no-preload-407991) Calling .Start
	I0422 18:25:16.728946   77400 main.go:141] libmachine: (no-preload-407991) Ensuring networks are active...
	I0422 18:25:16.729604   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network default is active
	I0422 18:25:16.729979   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network mk-no-preload-407991 is active
	I0422 18:25:16.730458   77400 main.go:141] libmachine: (no-preload-407991) Getting domain xml...
	I0422 18:25:16.731314   77400 main.go:141] libmachine: (no-preload-407991) Creating domain...
	I0422 18:25:18.079763   77400 main.go:141] libmachine: (no-preload-407991) Waiting to get IP...
	I0422 18:25:18.080862   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.081371   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.081401   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.081340   79353 retry.go:31] will retry after 226.494122ms: waiting for machine to come up
	I0422 18:25:18.309499   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.309914   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.310019   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.309900   79353 retry.go:31] will retry after 375.374338ms: waiting for machine to come up
	I0422 18:25:18.686507   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.687064   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.687093   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.687018   79353 retry.go:31] will retry after 341.714326ms: waiting for machine to come up
	I0422 18:25:19.030772   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.031261   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.031290   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.031229   79353 retry.go:31] will retry after 388.101939ms: waiting for machine to come up
	I0422 18:25:19.420994   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.421478   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.421500   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.421397   79353 retry.go:31] will retry after 732.485222ms: waiting for machine to come up
	I0422 18:25:20.155887   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:20.156717   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:20.156750   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:20.156665   79353 retry.go:31] will retry after 950.207106ms: waiting for machine to come up
	I0422 18:25:19.878966   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.364111   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:21.859384   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.362519   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.362552   77929 pod_ready.go:81] duration metric: took 5.011264858s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.362566   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371087   77929 pod_ready.go:92] pod "kube-proxy-4xvx2" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.371112   77929 pod_ready.go:81] duration metric: took 8.534129ms for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371142   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376156   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.376183   77929 pod_ready.go:81] duration metric: took 5.03143ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376196   77929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:24.385435   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:20.319994   78377 crio.go:462] duration metric: took 1.932984536s to copy over tarball
	I0422 18:25:20.320076   78377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:25:23.622384   78377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.30227916s)
	I0422 18:25:23.622411   78377 crio.go:469] duration metric: took 3.302385661s to extract the tarball
	I0422 18:25:23.622419   78377 ssh_runner.go:146] rm: /preloaded.tar.lz4
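Since no preloaded images were found in the runtime, the ~473 MB preload tarball is copied to the VM and unpacked into /var. The following is a roughly equivalent manual sequence (minikube uses its own SSH runner rather than plain scp; paths and credentials are taken from this log):
	KEY=/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa
	scp -i "$KEY" \
	  /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	  docker@192.168.72.149:/preloaded.tar.lz4
	# on the guest: extract into /var, preserving security.capability xattrs, then remove the tarball
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4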
	I0422 18:25:23.678794   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:23.720105   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:23.720138   78377 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:23.720191   78377 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.720221   78377 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.720264   78377 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.720285   78377 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:25:23.720310   78377 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.720396   78377 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.720464   78377 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.720244   78377 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721865   78377 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.721895   78377 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.721911   78377 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721925   78377 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.721986   78377 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.722013   78377 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.722040   78377 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.722415   78377 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:25:23.947080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:25:23.956532   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.969401   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.975080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.977902   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.987657   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.091349   78377 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:25:24.091415   78377 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:25:24.091473   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091508   78377 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:25:24.091564   78377 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.091612   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091773   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.112708   78377 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:25:24.112758   78377 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.112807   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.156371   78377 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:25:24.156420   78377 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.156476   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209420   78377 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:25:24.209468   78377 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.209467   78377 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:25:24.209504   78377 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.209519   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209533   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209580   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:25:24.209613   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.209666   78377 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:25:24.209697   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.209700   78377 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.209721   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.209750   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.319159   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:25:24.319265   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:25:24.319294   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:25:24.319374   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:25:24.319453   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.319532   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.319575   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.406665   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:25:24.406699   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:25:24.406776   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:25:24.581672   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
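The block above is the cache-images fallback: for each expected image, minikube inspects the ID reported by the runtime and, when it does not match the pinned hash, removes the image with crictl and schedules a load from the local image cache instead. The per-image check is roughly as follows (hypothetical sketch; the values for pause:3.2 are taken from this log):
	IMG=registry.k8s.io/pause:3.2
	WANT=80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c
	GOT=$(sudo podman image inspect --format '{{.Id}}' "$IMG" 2>/dev/null)
	if [ "$GOT" != "$WANT" ]; then
	  sudo /usr/bin/crictl rmi "$IMG"   # drop the mismatching image
	  # ...then load it from .minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	fi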
	I0422 18:25:21.108444   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:21.109056   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:21.109082   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:21.109004   79353 retry.go:31] will retry after 958.250136ms: waiting for machine to come up
	I0422 18:25:22.069541   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:22.070120   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:22.070144   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:22.070036   79353 retry.go:31] will retry after 989.607679ms: waiting for machine to come up
	I0422 18:25:23.061351   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:23.061877   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:23.061908   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:23.061823   79353 retry.go:31] will retry after 1.451989455s: waiting for machine to come up
	I0422 18:25:24.515233   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:24.515730   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:24.515755   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:24.515686   79353 retry.go:31] will retry after 2.303903602s: waiting for machine to come up
	I0422 18:25:24.365508   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.861066   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.389132   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:28.883625   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:24.724445   78377 cache_images.go:92] duration metric: took 1.004285991s to LoadCachedImages
	W0422 18:25:24.894312   78377 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0422 18:25:24.894361   78377 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:25:24.894488   78377 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
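The [Unit]/[Service] snippet above is rendered into the kubelet drop-in (written later in this log as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 430 bytes). After the daemon-reload further down, the effective unit can be inspected on the guest (an assumed check, not part of the log):
	sudo systemctl daemon-reload
	systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf ExecStart override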
	I0422 18:25:24.894582   78377 ssh_runner.go:195] Run: crio config
	I0422 18:25:24.951231   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:25:24.951266   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:24.951282   78377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:24.951305   78377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:25:24.951495   78377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:24.951570   78377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:25:24.964466   78377 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:24.964547   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:24.976092   78377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:25:24.995716   78377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:25.014159   78377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
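The kubeadm config generated above has just been written to /var/tmp/minikube/kubeadm.yaml.new. A hedged way to sanity-check it on the node would be a dry run against that file with the bundled binary; the kubeadm path below mirrors the kubelet path seen in this log and is an assumption:
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run   # prints planned actions without touching the node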
	I0422 18:25:25.036255   78377 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:25.040649   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:25.055323   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:25.186492   78377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:25.208819   78377 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:25:25.208862   78377 certs.go:194] generating shared ca certs ...
	I0422 18:25:25.208882   78377 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.209089   78377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:25.209144   78377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:25.209155   78377 certs.go:256] generating profile certs ...
	I0422 18:25:25.209307   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:25:25.209376   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:25:25.209438   78377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:25:25.209584   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:25.209623   78377 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:25.209632   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:25.209664   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:25.209701   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:25.209738   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:25.209791   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:25.210613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:25.262071   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:25.298556   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:25.331614   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:25.368285   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:25:25.403290   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:25.441081   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:25.487498   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:25:25.522482   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:25.549945   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:25.578991   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:25.608935   78377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:25.629179   78377 ssh_runner.go:195] Run: openssl version
	I0422 18:25:25.636149   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:25.648693   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653465   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653534   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.659701   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:25.671984   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:25.684361   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689344   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689410   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.695648   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:25.708266   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:25.721991   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726808   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726872   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.732974   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
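
	For reference on the hashing step above: "openssl x509 -hash -noout -in <cert>" prints the certificate's subject-name hash, and OpenSSL locates CA certificates in /etc/ssl/certs through symlinks named "<hash>.0", which is why each certificate gets a link such as /etc/ssl/certs/3ec20f2e.0. A minimal Go sketch of that convention (illustrative only, not minikube's certs.go; the path used in main is just an example):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// hashLink creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL uses
	// to look up a CA certificate by its subject-name hash.
	func hashLink(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror "ln -fs": replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("error:", err)
		}
	}
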
	I0422 18:25:25.749380   78377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:25.754517   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:25.761538   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:25.768472   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:25.775728   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:25.782337   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:25.788885   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
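
	The "-checkend 86400" invocations above make OpenSSL exit non-zero if a certificate will expire within the next 86400 seconds (24 hours). A minimal Go sketch of the same check (illustrative only; the path is an example):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// validFor24h reports whether the certificate will still be valid 24 hours
	// from now: "openssl x509 -checkend 86400" exits 0 only in that case.
	func validFor24h(certPath string) bool {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
		return cmd.Run() == nil
	}

	func main() {
		fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
	}
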
	I0422 18:25:25.795677   78377 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:25.795771   78377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:25.795839   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.837381   78377 cri.go:89] found id: ""
	I0422 18:25:25.837437   78377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:25.848554   78377 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:25.848574   78377 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:25.848579   78377 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:25.848625   78377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:25.860204   78377 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:25.861212   78377 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:25:25.861884   78377 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-367072" cluster setting kubeconfig missing "old-k8s-version-367072" context setting]
	I0422 18:25:25.862851   78377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.864562   78377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:25.875151   78377 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.149
	I0422 18:25:25.875182   78377 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:25.875193   78377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:25.875255   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.915872   78377 cri.go:89] found id: ""
	I0422 18:25:25.915982   78377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:25.934776   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:25.946299   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:25.946326   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:25.946378   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:25:25.957495   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:25.957578   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:25.968843   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:25:25.981829   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:25.981909   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:25.995318   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.009567   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:26.009630   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.024306   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:25:26.036008   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:26.036075   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:26.046594   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:26.057056   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:26.207676   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.085460   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.324735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.431848   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.541157   78377 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:27.541254   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.042131   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.542270   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.041887   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.542069   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:26.821539   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:26.822006   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:26.822033   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:26.821950   79353 retry.go:31] will retry after 1.870697225s: waiting for machine to come up
	I0422 18:25:28.695072   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:28.695420   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:28.695466   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:28.695386   79353 retry.go:31] will retry after 2.327485176s: waiting for machine to come up
	I0422 18:25:28.861976   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:31.361339   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.883801   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:33.389422   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.041985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.541653   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.041304   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.542040   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.042024   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.541622   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.041428   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.541675   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.041841   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.541705   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.024382   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:31.024817   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:31.024845   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:31.024786   79353 retry.go:31] will retry after 2.767538103s: waiting for machine to come up
	I0422 18:25:33.794390   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:33.794834   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:33.794872   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:33.794808   79353 retry.go:31] will retry after 5.661373675s: waiting for machine to come up
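
	The retry.go lines above show libmachine repeatedly polling for the domain's DHCP lease, sleeping a randomized, growing delay between attempts until an IP address appears. A minimal Go sketch of that kind of wait loop (illustrative only; lookupIP is a hypothetical stand-in for the libvirt lease query, and the backoff policy here is an assumption, not minikube's exact schedule):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls until the domain has an address or the deadline passes,
	// sleeping a randomized, growing interval between attempts.
	func waitForIP(domain string, deadline time.Duration) (string, error) {
		start := time.Now()
		wait := time.Second
		for time.Since(start) < deadline {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			d := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", d)
			time.Sleep(d)
			wait *= 2 // back off between attempts
		}
		return "", fmt.Errorf("timed out waiting for %s", domain)
	}

	func main() {
		if ip, err := waitForIP("no-preload-407991", 30*time.Second); err == nil {
			fmt.Println("found IP:", ip)
		} else {
			fmt.Println(err)
		}
	}
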
	I0422 18:25:33.860276   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.861770   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:38.361316   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.883098   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:37.883749   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.041898   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.541499   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.041443   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.542150   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.042296   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.542002   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.041367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.541518   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.041471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.542025   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.457864   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458407   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has current primary IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458447   77400 main.go:141] libmachine: (no-preload-407991) Found IP for machine: 192.168.39.164
	I0422 18:25:39.458492   77400 main.go:141] libmachine: (no-preload-407991) Reserving static IP address...
	I0422 18:25:39.458954   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.458980   77400 main.go:141] libmachine: (no-preload-407991) DBG | skip adding static IP to network mk-no-preload-407991 - found existing host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"}
	I0422 18:25:39.458992   77400 main.go:141] libmachine: (no-preload-407991) Reserved static IP address: 192.168.39.164
	I0422 18:25:39.459012   77400 main.go:141] libmachine: (no-preload-407991) Waiting for SSH to be available...
	I0422 18:25:39.459027   77400 main.go:141] libmachine: (no-preload-407991) DBG | Getting to WaitForSSH function...
	I0422 18:25:39.461404   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461715   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.461746   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461875   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH client type: external
	I0422 18:25:39.461906   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa (-rw-------)
	I0422 18:25:39.461956   77400 main.go:141] libmachine: (no-preload-407991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:39.461974   77400 main.go:141] libmachine: (no-preload-407991) DBG | About to run SSH command:
	I0422 18:25:39.461992   77400 main.go:141] libmachine: (no-preload-407991) DBG | exit 0
	I0422 18:25:39.591446   77400 main.go:141] libmachine: (no-preload-407991) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:39.591795   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetConfigRaw
	I0422 18:25:39.592473   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.594928   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595379   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.595414   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595632   77400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/config.json ...
	I0422 18:25:39.595890   77400 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:39.595914   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:39.596103   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.598532   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.598899   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.598929   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.599071   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.599270   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599450   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599592   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.599728   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.599927   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.599942   77400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:39.712043   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:39.712081   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712336   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:25:39.712363   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712548   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.715474   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.715936   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.715960   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.716089   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.716265   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716396   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716530   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.716656   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.716860   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.716874   77400 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407991 && echo "no-preload-407991" | sudo tee /etc/hostname
	I0422 18:25:39.845921   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407991
	
	I0422 18:25:39.845959   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.848790   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849093   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.849121   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849288   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.849495   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849638   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849817   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.850014   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.850183   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.850200   77400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407991/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:39.977389   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:39.977427   77400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:39.977447   77400 buildroot.go:174] setting up certificates
	I0422 18:25:39.977456   77400 provision.go:84] configureAuth start
	I0422 18:25:39.977468   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.977754   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.980800   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981266   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.981305   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981458   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.984031   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984478   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.984510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984654   77400 provision.go:143] copyHostCerts
	I0422 18:25:39.984713   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:39.984725   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:39.984788   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:39.984907   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:39.984918   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:39.984952   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:39.985038   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:39.985048   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:39.985076   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:39.985158   77400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.no-preload-407991 san=[127.0.0.1 192.168.39.164 localhost minikube no-preload-407991]
	I0422 18:25:40.224235   77400 provision.go:177] copyRemoteCerts
	I0422 18:25:40.224306   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:40.224352   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.227355   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.227814   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.227842   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.228035   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.228232   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.228392   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.228560   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.318916   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:40.346168   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:40.371490   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:25:40.396866   77400 provision.go:87] duration metric: took 419.381117ms to configureAuth
	I0422 18:25:40.396899   77400 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:40.397067   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:25:40.397130   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.399642   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400060   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.400095   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400269   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.400466   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400652   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400832   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.401018   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.401176   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.401191   77400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:40.698107   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:40.698140   77400 machine.go:97] duration metric: took 1.102235221s to provisionDockerMachine
	I0422 18:25:40.698153   77400 start.go:293] postStartSetup for "no-preload-407991" (driver="kvm2")
	I0422 18:25:40.698171   77400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:40.698187   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.698497   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:40.698532   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.701545   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.701933   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.701964   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.702070   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.702295   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.702492   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.702727   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.800538   77400 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:40.805027   77400 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:40.805060   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:40.805133   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:40.805216   77400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:40.805304   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:40.816872   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:40.843857   77400 start.go:296] duration metric: took 145.69044ms for postStartSetup
	I0422 18:25:40.843896   77400 fix.go:56] duration metric: took 24.13914409s for fixHost
	I0422 18:25:40.843914   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.846770   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847148   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.847184   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847391   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.847605   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847778   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847966   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.848199   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.848382   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.848396   77400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:40.964440   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810340.939149386
	
	I0422 18:25:40.964473   77400 fix.go:216] guest clock: 1713810340.939149386
	I0422 18:25:40.964483   77400 fix.go:229] Guest: 2024-04-22 18:25:40.939149386 +0000 UTC Remote: 2024-04-22 18:25:40.843899302 +0000 UTC m=+360.205454093 (delta=95.250084ms)
	I0422 18:25:40.964508   77400 fix.go:200] guest clock delta is within tolerance: 95.250084ms
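
	Above, the guest clock is read with "date +%s.%N", parsed as seconds.nanoseconds since the epoch, and compared with the host-side timestamp; the roughly 95ms delta is within tolerance, so no clock adjustment is needed. A minimal Go sketch of that comparison, reusing the values from the log (illustrative only; the tolerance constant is an example, not minikube's setting):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "date +%s.%N" output such as
	// "1713810340.939149386" (9-digit nanosecond fraction) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1713810340.939149386")
		if err != nil {
			panic(err)
		}
		remote := time.Date(2024, 4, 22, 18, 25, 40, 843899302, time.UTC)
		delta := guest.Sub(remote)
		const tolerance = 2 * time.Second // example threshold only
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta < tolerance && delta > -tolerance)
	}
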
	I0422 18:25:40.964513   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 24.259798286s
	I0422 18:25:40.964535   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.964813   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:40.967510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.967906   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.967932   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.968087   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968610   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968782   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968866   77400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:40.968910   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.969047   77400 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:40.969074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.971818   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972039   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972190   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972203   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972394   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972565   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972580   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972594   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972733   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972791   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.972875   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972948   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.973062   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.973206   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:41.092004   77400 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:41.098574   77400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:41.242800   77400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:41.250454   77400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:41.250521   77400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:41.267380   77400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:41.267408   77400 start.go:494] detecting cgroup driver to use...
	I0422 18:25:41.267478   77400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:41.284742   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:41.299527   77400 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:41.299596   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:41.314189   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:41.329444   77400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:41.456719   77400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:41.628305   77400 docker.go:233] disabling docker service ...
	I0422 18:25:41.628376   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:41.643226   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:41.657578   77400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:41.780449   77400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:41.898823   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:41.913578   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:41.933621   77400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:25:41.933679   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.944309   77400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:41.944382   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.955308   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.966445   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.977509   77400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:41.989479   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.001915   77400 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.020554   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.033225   77400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:42.044177   77400 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:42.044231   77400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:42.060403   77400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:42.071760   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:42.213747   77400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:42.361818   77400 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:42.361911   77400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:42.367211   77400 start.go:562] Will wait 60s for crictl version
	I0422 18:25:42.367265   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.371042   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:42.408686   77400 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:42.408773   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.438447   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.469117   77400 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:25:40.862849   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.361826   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:39.884361   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:41.885199   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.885865   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:40.041777   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.541411   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.041834   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.542328   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.042211   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.542008   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.041844   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.542121   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.041564   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.541344   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.470665   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:42.473467   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.473845   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:42.473871   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.474121   77400 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:42.478401   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:42.491034   77400 kubeadm.go:877] updating cluster {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:42.491163   77400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:25:42.491203   77400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:42.530418   77400 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:25:42.530443   77400 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.530585   77400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.530641   77400 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0422 18:25:42.530601   77400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.530609   77400 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.530622   77400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.530626   77400 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532108   77400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.532136   77400 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0422 18:25:42.532111   77400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.532113   77400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.532175   77400 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532197   77400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.532223   77400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.532506   77400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.735366   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.750777   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0422 18:25:42.758260   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.759633   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.763447   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.765416   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.803799   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.832904   77400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0422 18:25:42.832959   77400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.833021   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981471   77400 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0422 18:25:42.981528   77400 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.981553   77400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0422 18:25:42.981584   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981592   77400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.981635   77400 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0422 18:25:42.981663   77400 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.981687   77400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0422 18:25:42.981699   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981642   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981716   77400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.981770   77400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0422 18:25:42.981776   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981788   77400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.981820   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981846   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:43.021364   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0422 18:25:43.021416   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:43.021455   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.021460   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:43.021529   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:43.021534   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:43.021585   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:43.130300   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0422 18:25:43.130373   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0422 18:25:43.130408   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:43.130425   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0422 18:25:43.130455   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:43.130514   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:43.134769   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0422 18:25:43.134785   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0422 18:25:43.134797   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134839   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134853   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:43.134882   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0422 18:25:43.134959   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:43.142273   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0422 18:25:43.142486   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0422 18:25:43.142837   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0422 18:25:43.840108   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210614   77400 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.075740127s)
	I0422 18:25:45.210650   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0422 18:25:45.210655   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.075789371s)
	I0422 18:25:45.210676   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0422 18:25:45.210693   77400 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.075715404s)
	I0422 18:25:45.210699   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210706   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0422 18:25:45.210748   77400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.370610047s)
	I0422 18:25:45.210785   77400 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0422 18:25:45.210750   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210842   77400 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210969   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:45.363082   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:47.861802   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:46.383938   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:48.385209   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:45.042273   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.541576   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.041447   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.541920   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.042364   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.541813   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.042362   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.541320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.041845   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.542204   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.203063   77400 ssh_runner.go:235] Completed: which crictl: (2.992066474s)
	I0422 18:25:48.203106   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.992228832s)
	I0422 18:25:48.203143   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0422 18:25:48.203159   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:48.203171   77400 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:48.203210   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:49.863963   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:52.370507   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.883608   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:53.386229   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.042263   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.541538   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.042055   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.041479   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.542313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.041554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.541500   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.042153   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.541953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.419429   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.216195193s)
	I0422 18:25:52.419462   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0422 18:25:52.419474   77400 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.216288559s)
	I0422 18:25:52.419488   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419513   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0422 18:25:52.419537   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419581   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:52.424638   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0422 18:25:53.873720   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.454157304s)
	I0422 18:25:53.873750   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0422 18:25:53.873780   77400 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:53.873825   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:54.860810   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:56.864272   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.388103   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:57.887970   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.041393   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.541470   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.042188   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.541734   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.042041   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.541540   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.041682   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.542178   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.042125   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.542154   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.955181   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.081308071s)
	I0422 18:25:55.955210   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0422 18:25:55.955236   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:55.955300   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:58.218734   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.263410883s)
	I0422 18:25:58.218762   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0422 18:25:58.218792   77400 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:58.218843   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:59.071398   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0422 18:25:59.071443   77400 cache_images.go:123] Successfully loaded all cached images
	I0422 18:25:59.071450   77400 cache_images.go:92] duration metric: took 16.54097573s to LoadCachedImages
	I0422 18:25:59.071463   77400 kubeadm.go:928] updating node { 192.168.39.164 8443 v1.30.0 crio true true} ...
	I0422 18:25:59.071610   77400 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-407991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
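The [Unit]/[Service]/[Install] fragment above is the kubelet override that the log later writes as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes) before running daemon-reload and starting kubelet. A minimal sketch of writing that drop-in by hand, with the contents copied from the log; on a real node minikube does this over SSH.

    // kubelet_dropin_sketch.go: write the kubelet ExecStart override rendered
    // above. Illustrative only; requires root and a systemctl daemon-reload
    // afterwards, as the log shows.
    package main

    import "os"

    const dropin = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-407991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164

    [Install]
    `

    func main() {
    	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropin), 0644); err != nil {
    		panic(err)
    	}
    }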
	I0422 18:25:59.071698   77400 ssh_runner.go:195] Run: crio config
	I0422 18:25:59.125757   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:25:59.125783   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:59.125800   77400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:59.125832   77400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-407991 NodeName:no-preload-407991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:59.126001   77400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-407991"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
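The rendered kubeadm config above is not applied directly: the log stages it as /var/tmp/minikube/kubeadm.yaml.new (2161 bytes), later diffs it against the existing kubeadm.yaml, and only then copies it into place ahead of the kubeadm init phase calls. A compressed sketch of that stage, diff and promote step, with paths from the log and the exec wiring purely illustrative:

    // kubeadm_stage_sketch.go: diff the staged kubeadm.yaml.new against the
    // current kubeadm.yaml and promote it when they differ, mirroring the
    // scp/diff/cp lines in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const cur = "/var/tmp/minikube/kubeadm.yaml"
    	const next = "/var/tmp/minikube/kubeadm.yaml.new"

    	// diff -u exits 0 when the files are identical, 1 when they differ.
    	if err := exec.Command("sudo", "diff", "-u", cur, next).Run(); err == nil {
    		fmt.Println("config unchanged, nothing to promote")
    		return
    	}
    	if out, err := exec.Command("sudo", "cp", next, cur).CombinedOutput(); err != nil {
    		fmt.Printf("promote failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("promoted", next, "->", cur)
    }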
	
	I0422 18:25:59.126073   77400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:59.137254   77400 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:59.137320   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:59.146983   77400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0422 18:25:59.165207   77400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:59.182898   77400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0422 18:25:59.201735   77400 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:59.206108   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:59.219642   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:59.336565   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:59.356844   77400 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991 for IP: 192.168.39.164
	I0422 18:25:59.356873   77400 certs.go:194] generating shared ca certs ...
	I0422 18:25:59.356893   77400 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:59.357058   77400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:59.357121   77400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:59.357133   77400 certs.go:256] generating profile certs ...
	I0422 18:25:59.357209   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/client.key
	I0422 18:25:59.357329   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key.6aa1268b
	I0422 18:25:59.357413   77400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key
	I0422 18:25:59.357574   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:59.357616   77400 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:59.357631   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:59.357672   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:59.357707   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:59.357745   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:59.357823   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:59.358765   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:59.395982   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:59.430445   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:59.465415   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:59.502678   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 18:25:59.538225   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:25:59.570635   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:59.596096   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:59.622051   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:59.647372   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:59.673650   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:59.699515   77400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:59.717253   77400 ssh_runner.go:195] Run: openssl version
	I0422 18:25:59.723704   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:59.735265   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740264   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740319   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.746445   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:59.757879   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:59.769243   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774505   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774562   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.780572   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:59.793472   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:59.805187   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810148   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810191   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.816350   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:59.828208   77400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:59.832799   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:59.838952   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:59.845145   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:59.851309   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:59.857643   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:59.864892   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
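The openssl runs above are expiry probes: x509 -checkend 86400 exits non-zero when a certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regenerating. A small wrapper over the same probe, using the cert paths from the log:

    // certexpiry_sketch.go: run `openssl x509 -checkend 86400` against the
    // control-plane certs listed in the log; a non-zero exit means the cert
    // expires within 24h. Illustrative wrapper only.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
    		"/var/lib/minikube/certs/etcd/peer.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
    		if err != nil {
    			fmt.Printf("%s: expiring within 24h (or unreadable): %v\n", c, err)
    			continue
    		}
    		fmt.Printf("%s: valid for at least another 24h\n", c)
    	}
    }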
	I0422 18:25:59.873625   77400 kubeadm.go:391] StartCluster: {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:59.873749   77400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:59.873826   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.913578   77400 cri.go:89] found id: ""
	I0422 18:25:59.913656   77400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:59.925105   77400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:59.925131   77400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:59.925138   77400 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:59.925192   77400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:59.935942   77400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:59.937363   77400 kubeconfig.go:125] found "no-preload-407991" server: "https://192.168.39.164:8443"
	I0422 18:25:59.939672   77400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:59.949774   77400 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.164
	I0422 18:25:59.949810   77400 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:59.949841   77400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:59.949896   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.989385   77400 cri.go:89] found id: ""
	I0422 18:25:59.989443   77400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:26:00.005985   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:26:00.016873   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:26:00.016897   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:26:00.016953   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:26:00.027119   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:26:00.027205   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:26:00.038360   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:26:00.048176   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:26:00.048246   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:26:00.058861   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.068955   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:26:00.069018   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.079147   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:26:00.089400   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:26:00.089477   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
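The grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so that kubeadm init phase kubeconfig all can regenerate it (in this run every file is simply missing, so each grep fails). A sketch of that loop:

    // staleconf_sketch.go: keep a kubeconfig only if it targets the expected
    // control-plane endpoint, mirroring the grep/rm pairs in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range confs {
    		// grep exits non-zero when the endpoint is absent or the file is missing.
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%s: endpoint not found, removing so kubeadm regenerates it\n", f)
    			_ = os.Remove(f)
    			continue
    		}
    		fmt.Printf("%s: already points at %s, keeping\n", f, endpoint)
    	}
    }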
	I0422 18:26:00.100245   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:26:00.111040   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:00.224436   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:59.362215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:01.860196   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.388433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:02.883211   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.042114   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.542138   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.042285   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.542226   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.041310   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.541432   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.041406   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.542306   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.042010   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.541508   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.838456   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.057201   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.143346   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.294896   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:26:01.295031   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.795945   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.296085   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.324434   77400 api_server.go:72] duration metric: took 1.029539423s to wait for apiserver process to appear ...
	I0422 18:26:02.324467   77400 api_server.go:88] waiting for apiserver healthz status ...
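From here the restart waits in two stages: pgrep -xnf kube-apiserver.*minikube.* until the apiserver process exists, then repeated GETs of https://192.168.39.164:8443/healthz until it stops returning 403/500 (the poststarthook dumps below are the 500 bodies). A compact sketch of the second stage; TLS verification is skipped and no client certificate is presented, which is why a locked-down apiserver answers 403 as in the first probe, and the retry budget is an assumption.

    // healthz_wait_sketch.go: poll the apiserver /healthz endpoint until it
    // answers 200, roughly what api_server.go is doing above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	const url = "https://192.168.39.164:8443/healthz"
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			fmt.Println(url, "returned", resp.StatusCode)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for", url)
    }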
	I0422 18:26:02.324490   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.784948   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:26:04.784984   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:26:04.784997   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.844019   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.844064   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:04.844084   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.848805   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.848838   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.325458   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.332351   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.332410   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.824785   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.830293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.830318   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:06.325380   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:06.332804   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:26:06.344083   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:26:06.344110   77400 api_server.go:131] duration metric: took 4.019636154s to wait for apiserver health ...
	I0422 18:26:06.344118   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:26:06.344123   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:26:06.345875   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:26:03.863020   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:06.360428   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:04.884648   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:07.382356   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:09.388391   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:05.041961   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.541723   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.041954   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.541963   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.041378   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.541879   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.041942   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.541357   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.041425   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.541474   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.347812   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:26:06.361087   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:26:06.385654   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:26:06.398331   77400 system_pods.go:59] 8 kube-system pods found
	I0422 18:26:06.398372   77400 system_pods.go:61] "coredns-7db6d8ff4d-2p2sr" [3f42ce46-e76d-4bc8-9dd5-463a08948e4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:26:06.398384   77400 system_pods.go:61] "etcd-no-preload-407991" [96ae7feb-802f-44a8-81fc-5ea5de12e73b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:26:06.398396   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [28010e33-49a1-4c6b-90f9-939ede3ed97e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:26:06.398404   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [1e7db029-2196-499f-bc88-d780d065f80c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:26:06.398415   77400 system_pods.go:61] "kube-proxy-767q4" [1c6d01b0-caf0-4d52-8da8-caad7b158012] Running
	I0422 18:26:06.398426   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [3ef8d145-d90e-455d-98fe-de9e6080a178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:26:06.398433   77400 system_pods.go:61] "metrics-server-569cc877fc-jmjhm" [d831b01b-af2e-4c7f-944c-e768d724ee5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:26:06.398439   77400 system_pods.go:61] "storage-provisioner" [db8196df-a394-4e10-9db7-c10414833af3] Running
	I0422 18:26:06.398447   77400 system_pods.go:74] duration metric: took 12.770066ms to wait for pod list to return data ...
	I0422 18:26:06.398455   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:26:06.402125   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:26:06.402158   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:26:06.402170   77400 node_conditions.go:105] duration metric: took 3.709194ms to run NodePressure ...
	I0422 18:26:06.402195   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:06.676133   77400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680247   77400 kubeadm.go:733] kubelet initialised
	I0422 18:26:06.680269   77400 kubeadm.go:734] duration metric: took 4.114413ms waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680276   77400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:26:06.687275   77400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.693967   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.693986   77400 pod_ready.go:81] duration metric: took 6.687466ms for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.694004   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.694012   77400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.698539   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698562   77400 pod_ready.go:81] duration metric: took 4.539271ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.698571   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698578   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.703382   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703407   77400 pod_ready.go:81] duration metric: took 4.822601ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.703418   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703425   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.789413   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789449   77400 pod_ready.go:81] duration metric: took 86.014056ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.789459   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789465   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189544   77400 pod_ready.go:92] pod "kube-proxy-767q4" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:07.189572   77400 pod_ready.go:81] duration metric: took 400.096716ms for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189585   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:09.201757   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:08.861714   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.359820   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.362303   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.883726   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:14.382966   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:10.041640   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.541360   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.042045   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.542018   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.541590   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.042320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.542036   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.041303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.541575   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.697196   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.697458   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.861378   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:17.861808   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:16.385523   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:18.883000   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.042300   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.542084   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.541867   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.041409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.542019   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.042027   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.042237   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.541613   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.697079   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:15.697104   77400 pod_ready.go:81] duration metric: took 8.507511233s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:15.697116   77400 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:17.704095   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.204276   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.360946   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:22.861202   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.883107   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:23.383119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.042039   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.541667   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.041765   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.542383   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.042213   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.541317   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.042164   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.541367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.042303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.541416   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.204697   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.703926   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.861797   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.361089   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.384161   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.386172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.042321   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.541554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.041583   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.542179   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.041877   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.541400   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:27.541473   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:27.585381   78377 cri.go:89] found id: ""
	I0422 18:26:27.585411   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.585424   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:27.585431   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:27.585503   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:27.622536   78377 cri.go:89] found id: ""
	I0422 18:26:27.622568   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.622578   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:27.622584   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:27.622645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:27.665233   78377 cri.go:89] found id: ""
	I0422 18:26:27.665264   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.665272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:27.665278   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:27.665356   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:27.703600   78377 cri.go:89] found id: ""
	I0422 18:26:27.703629   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.703640   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:27.703647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:27.703706   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:27.741412   78377 cri.go:89] found id: ""
	I0422 18:26:27.741441   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.741451   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:27.741459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:27.741520   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:27.783184   78377 cri.go:89] found id: ""
	I0422 18:26:27.783211   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.783218   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:27.783224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:27.783290   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:27.825404   78377 cri.go:89] found id: ""
	I0422 18:26:27.825433   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.825443   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:27.825450   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:27.825513   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:27.862052   78377 cri.go:89] found id: ""
	I0422 18:26:27.862076   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.862086   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:27.862096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:27.862109   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:27.914533   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:27.914564   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:27.929474   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:27.929502   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:28.054566   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:28.054595   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:28.054612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:28.119416   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:28.119451   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:27.204128   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.207057   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.364913   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.883085   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.883536   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.883927   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:30.667642   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:30.680870   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:30.680930   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:30.719832   78377 cri.go:89] found id: ""
	I0422 18:26:30.719863   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.719874   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:30.719881   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:30.719940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:30.756168   78377 cri.go:89] found id: ""
	I0422 18:26:30.756195   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.756206   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:30.756213   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:30.756267   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:30.792940   78377 cri.go:89] found id: ""
	I0422 18:26:30.792963   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.792971   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:30.792976   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:30.793021   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:30.827452   78377 cri.go:89] found id: ""
	I0422 18:26:30.827480   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.827490   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:30.827497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:30.827563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:30.868058   78377 cri.go:89] found id: ""
	I0422 18:26:30.868088   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.868099   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:30.868107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:30.868170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:30.908639   78377 cri.go:89] found id: ""
	I0422 18:26:30.908672   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.908680   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:30.908686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:30.908735   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:30.959048   78377 cri.go:89] found id: ""
	I0422 18:26:30.959073   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.959080   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:30.959085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:30.959153   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:30.998779   78377 cri.go:89] found id: ""
	I0422 18:26:30.998809   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.998821   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:30.998856   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:30.998875   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:31.053763   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:31.053804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:31.069522   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:31.069558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:31.147512   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:31.147541   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:31.147556   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:31.222713   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:31.222752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:33.765573   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:33.781038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:33.781116   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:33.822148   78377 cri.go:89] found id: ""
	I0422 18:26:33.822175   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.822182   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:33.822187   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:33.822282   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:33.862524   78377 cri.go:89] found id: ""
	I0422 18:26:33.862553   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.862559   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:33.862565   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:33.862626   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:33.905952   78377 cri.go:89] found id: ""
	I0422 18:26:33.905980   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.905991   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:33.905999   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:33.906059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:33.943184   78377 cri.go:89] found id: ""
	I0422 18:26:33.943212   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.943220   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:33.943227   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:33.943285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:33.981677   78377 cri.go:89] found id: ""
	I0422 18:26:33.981712   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.981723   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:33.981731   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:33.981790   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:34.025999   78377 cri.go:89] found id: ""
	I0422 18:26:34.026026   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.026035   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:34.026042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:34.026102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:34.062940   78377 cri.go:89] found id: ""
	I0422 18:26:34.062967   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.062977   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:34.062985   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:34.063044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:34.103112   78377 cri.go:89] found id: ""
	I0422 18:26:34.103153   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.103164   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:34.103175   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:34.103189   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:34.156907   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:34.156944   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:34.171581   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:34.171608   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:34.252755   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:34.252784   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:34.252799   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:34.334118   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:34.334155   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:31.704123   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:34.206443   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.863261   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.360525   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.361132   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.385507   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.882649   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.882905   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:36.897949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:36.898026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:36.934776   78377 cri.go:89] found id: ""
	I0422 18:26:36.934801   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.934808   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:36.934814   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:36.934870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:36.974432   78377 cri.go:89] found id: ""
	I0422 18:26:36.974459   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.974467   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:36.974472   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:36.974519   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:37.011460   78377 cri.go:89] found id: ""
	I0422 18:26:37.011485   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.011496   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:37.011503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:37.011583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:37.056559   78377 cri.go:89] found id: ""
	I0422 18:26:37.056592   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.056604   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:37.056611   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:37.056670   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:37.095328   78377 cri.go:89] found id: ""
	I0422 18:26:37.095359   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.095371   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:37.095379   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:37.095460   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:37.132056   78377 cri.go:89] found id: ""
	I0422 18:26:37.132084   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.132095   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:37.132101   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:37.132162   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:37.168957   78377 cri.go:89] found id: ""
	I0422 18:26:37.168987   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.168998   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:37.169005   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:37.169072   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:37.207501   78377 cri.go:89] found id: ""
	I0422 18:26:37.207533   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.207544   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:37.207553   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:37.207567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:37.289851   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:37.289890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:37.351454   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:37.351481   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:37.409901   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:37.409938   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:37.425203   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:37.425234   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:37.508518   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:36.704473   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:39.204839   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.863837   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.362000   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.887004   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.384351   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.008934   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:40.023037   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:40.023096   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:40.066750   78377 cri.go:89] found id: ""
	I0422 18:26:40.066791   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.066811   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:40.066818   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:40.066889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:40.106562   78377 cri.go:89] found id: ""
	I0422 18:26:40.106584   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.106592   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:40.106598   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:40.106644   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:40.145265   78377 cri.go:89] found id: ""
	I0422 18:26:40.145300   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.145311   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:40.145319   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:40.145385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:40.182667   78377 cri.go:89] found id: ""
	I0422 18:26:40.182696   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.182707   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:40.182714   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:40.182772   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:40.227084   78377 cri.go:89] found id: ""
	I0422 18:26:40.227114   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.227139   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:40.227148   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:40.227203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:40.264298   78377 cri.go:89] found id: ""
	I0422 18:26:40.264326   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.264333   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:40.264339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:40.264404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:40.302071   78377 cri.go:89] found id: ""
	I0422 18:26:40.302103   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.302113   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:40.302121   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:40.302191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:40.340031   78377 cri.go:89] found id: ""
	I0422 18:26:40.340072   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.340083   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:40.340094   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:40.340108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:40.386371   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:40.386402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:40.438805   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:40.438884   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:40.455199   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:40.455240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:40.535984   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:40.536006   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:40.536024   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.125605   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:43.139961   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:43.140033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:43.176588   78377 cri.go:89] found id: ""
	I0422 18:26:43.176615   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.176625   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:43.176632   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:43.176695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:43.215868   78377 cri.go:89] found id: ""
	I0422 18:26:43.215900   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.215921   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:43.215929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:43.215991   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:43.253562   78377 cri.go:89] found id: ""
	I0422 18:26:43.253592   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.253603   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:43.253608   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:43.253652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:43.289305   78377 cri.go:89] found id: ""
	I0422 18:26:43.289335   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.289346   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:43.289353   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:43.289417   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:43.329241   78377 cri.go:89] found id: ""
	I0422 18:26:43.329286   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.329295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:43.329300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:43.329351   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:43.369682   78377 cri.go:89] found id: ""
	I0422 18:26:43.369700   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.369707   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:43.369713   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:43.369764   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:43.411788   78377 cri.go:89] found id: ""
	I0422 18:26:43.411812   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.411821   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:43.411829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:43.411911   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:43.447351   78377 cri.go:89] found id: ""
	I0422 18:26:43.447387   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.447398   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:43.447407   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:43.447418   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:43.520087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:43.520114   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:43.520125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.602199   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:43.602233   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:43.645723   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:43.645748   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:43.702769   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:43.702804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:41.704418   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.704878   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.362073   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.860279   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.385285   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.882420   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:46.229598   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:46.243348   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:46.243418   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:46.282470   78377 cri.go:89] found id: ""
	I0422 18:26:46.282500   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.282512   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:46.282519   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:46.282584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:46.327718   78377 cri.go:89] found id: ""
	I0422 18:26:46.327747   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.327755   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:46.327761   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:46.327829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:46.369785   78377 cri.go:89] found id: ""
	I0422 18:26:46.369807   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.369814   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:46.369820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:46.369867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:46.408132   78377 cri.go:89] found id: ""
	I0422 18:26:46.408161   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.408170   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:46.408175   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:46.408236   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:46.450058   78377 cri.go:89] found id: ""
	I0422 18:26:46.450084   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.450091   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:46.450096   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:46.450144   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:46.493747   78377 cri.go:89] found id: ""
	I0422 18:26:46.493776   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.493788   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:46.493794   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:46.493847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:46.529054   78377 cri.go:89] found id: ""
	I0422 18:26:46.529090   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.529102   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:46.529122   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:46.529186   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:46.566699   78377 cri.go:89] found id: ""
	I0422 18:26:46.566724   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.566732   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:46.566740   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:46.566752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.582569   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:46.582606   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:46.652188   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:46.652212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:46.652224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:46.732276   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:46.732316   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:46.789834   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:46.789862   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.343229   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:49.357513   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:49.357571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:49.396741   78377 cri.go:89] found id: ""
	I0422 18:26:49.396774   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.396785   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:49.396792   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:49.396862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:49.432048   78377 cri.go:89] found id: ""
	I0422 18:26:49.432081   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.432093   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:49.432100   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:49.432159   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:49.482104   78377 cri.go:89] found id: ""
	I0422 18:26:49.482130   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.482138   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:49.482145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:49.482202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:49.526782   78377 cri.go:89] found id: ""
	I0422 18:26:49.526811   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.526823   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:49.526830   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:49.526884   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:49.575436   78377 cri.go:89] found id: ""
	I0422 18:26:49.575471   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.575482   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:49.575490   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:49.575553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:49.628839   78377 cri.go:89] found id: ""
	I0422 18:26:49.628862   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.628870   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:49.628875   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:49.628940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:45.706474   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:48.205681   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.860748   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.360586   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.884553   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:51.885527   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.387502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.670046   78377 cri.go:89] found id: ""
	I0422 18:26:49.670074   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.670085   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:49.670091   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:49.670158   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:49.707083   78377 cri.go:89] found id: ""
	I0422 18:26:49.707109   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.707119   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:49.707144   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:49.707157   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.762794   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:49.762838   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:49.777771   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:49.777801   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:49.853426   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:49.853448   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:49.853463   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:49.934621   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:49.934659   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:52.481352   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:52.495956   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:52.496025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:52.539518   78377 cri.go:89] found id: ""
	I0422 18:26:52.539549   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.539559   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:52.539566   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:52.539627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:52.580604   78377 cri.go:89] found id: ""
	I0422 18:26:52.580632   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.580641   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:52.580646   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:52.580700   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:52.622746   78377 cri.go:89] found id: ""
	I0422 18:26:52.622775   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.622783   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:52.622795   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:52.622858   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:52.659557   78377 cri.go:89] found id: ""
	I0422 18:26:52.659579   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.659587   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:52.659592   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:52.659661   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:52.697653   78377 cri.go:89] found id: ""
	I0422 18:26:52.697678   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.697685   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:52.697691   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:52.697745   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:52.735505   78377 cri.go:89] found id: ""
	I0422 18:26:52.735536   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.735546   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:52.735554   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:52.735616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:52.774216   78377 cri.go:89] found id: ""
	I0422 18:26:52.774239   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.774247   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:52.774261   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:52.774318   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:52.812909   78377 cri.go:89] found id: ""
	I0422 18:26:52.812934   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.812941   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:52.812949   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:52.812981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:52.897636   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:52.897663   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:52.897679   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:52.985013   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:52.985046   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:53.031395   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:53.031427   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:53.088446   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:53.088480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:50.703624   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.704794   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.204187   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.861314   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:57.360430   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:56.882974   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:58.884770   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.603647   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:55.617977   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:55.618039   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:55.663769   78377 cri.go:89] found id: ""
	I0422 18:26:55.663797   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.663815   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:55.663822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:55.663925   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:55.701287   78377 cri.go:89] found id: ""
	I0422 18:26:55.701326   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.701338   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:55.701346   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:55.701435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:55.740041   78377 cri.go:89] found id: ""
	I0422 18:26:55.740067   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.740078   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:55.740107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:55.740163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:55.779093   78377 cri.go:89] found id: ""
	I0422 18:26:55.779143   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.779154   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:55.779170   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:55.779219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:55.822107   78377 cri.go:89] found id: ""
	I0422 18:26:55.822133   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.822141   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:55.822146   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:55.822195   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:55.862157   78377 cri.go:89] found id: ""
	I0422 18:26:55.862204   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.862215   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:55.862224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:55.862295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:55.902557   78377 cri.go:89] found id: ""
	I0422 18:26:55.902582   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.902595   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:55.902601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:55.902663   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:55.942185   78377 cri.go:89] found id: ""
	I0422 18:26:55.942215   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.942226   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:55.942237   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:55.942252   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.957050   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:55.957083   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:56.035015   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:56.035043   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:56.035058   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:56.125595   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:56.125636   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:56.169096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:56.169131   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:58.725079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:58.739736   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:58.739808   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:58.777724   78377 cri.go:89] found id: ""
	I0422 18:26:58.777752   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.777762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:58.777769   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:58.777828   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:58.814668   78377 cri.go:89] found id: ""
	I0422 18:26:58.814702   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.814713   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:58.814721   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:58.814791   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:58.852609   78377 cri.go:89] found id: ""
	I0422 18:26:58.852634   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.852648   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:58.852655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:58.852720   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:58.891881   78377 cri.go:89] found id: ""
	I0422 18:26:58.891904   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.891910   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:58.891936   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:58.891994   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:58.931663   78377 cri.go:89] found id: ""
	I0422 18:26:58.931690   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.931701   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:58.931708   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:58.931782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:58.967795   78377 cri.go:89] found id: ""
	I0422 18:26:58.967816   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.967823   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:58.967829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:58.967879   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:59.008898   78377 cri.go:89] found id: ""
	I0422 18:26:59.008932   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.008943   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:59.008950   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:59.009007   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:59.049230   78377 cri.go:89] found id: ""
	I0422 18:26:59.049267   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.049278   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:59.049288   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:59.049304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:59.104461   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:59.104508   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:59.119555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:59.119584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:59.195905   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:59.195952   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:59.195969   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:59.276319   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:59.276360   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:57.703613   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:00.205449   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:59.861376   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.862613   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.386313   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:03.883728   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.818221   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:01.833234   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:01.833294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:01.870997   78377 cri.go:89] found id: ""
	I0422 18:27:01.871022   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.871030   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:01.871036   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:01.871102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:01.910414   78377 cri.go:89] found id: ""
	I0422 18:27:01.910443   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.910453   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:01.910461   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:01.910526   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:01.949499   78377 cri.go:89] found id: ""
	I0422 18:27:01.949524   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.949532   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:01.949537   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:01.949598   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:01.987702   78377 cri.go:89] found id: ""
	I0422 18:27:01.987736   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.987747   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:01.987763   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:01.987836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:02.027193   78377 cri.go:89] found id: ""
	I0422 18:27:02.027222   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.027233   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:02.027240   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:02.027332   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:02.067537   78377 cri.go:89] found id: ""
	I0422 18:27:02.067564   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.067578   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:02.067584   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:02.067631   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:02.111085   78377 cri.go:89] found id: ""
	I0422 18:27:02.111112   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.111119   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:02.111140   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:02.111194   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:02.150730   78377 cri.go:89] found id: ""
	I0422 18:27:02.150760   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.150769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:02.150777   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:02.150789   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:02.230124   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:02.230150   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:02.230164   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:02.315337   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:02.315384   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:02.362022   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:02.362048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:02.421884   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:02.421924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:02.205610   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.704158   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.359865   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:06.359968   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.360935   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:05.884072   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.386493   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.937145   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:04.952303   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:04.952412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:04.995024   78377 cri.go:89] found id: ""
	I0422 18:27:04.995059   78377 logs.go:276] 0 containers: []
	W0422 18:27:04.995071   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:04.995079   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:04.995151   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:05.035094   78377 cri.go:89] found id: ""
	I0422 18:27:05.035129   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.035141   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:05.035148   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:05.035204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:05.074178   78377 cri.go:89] found id: ""
	I0422 18:27:05.074204   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.074215   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:05.074222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:05.074294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:05.115285   78377 cri.go:89] found id: ""
	I0422 18:27:05.115313   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.115324   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:05.115331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:05.115398   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:05.151000   78377 cri.go:89] found id: ""
	I0422 18:27:05.151032   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.151041   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:05.151047   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:05.151189   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:05.191627   78377 cri.go:89] found id: ""
	I0422 18:27:05.191651   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.191659   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:05.191664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:05.191710   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:05.232141   78377 cri.go:89] found id: ""
	I0422 18:27:05.232173   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.232183   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:05.232191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:05.232252   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:05.268498   78377 cri.go:89] found id: ""
	I0422 18:27:05.268523   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.268530   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:05.268537   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:05.268554   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:05.315909   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:05.315937   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:05.369623   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:05.369664   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:05.387343   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:05.387381   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:05.466087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:05.466106   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:05.466117   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:08.053578   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:08.067569   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:08.067627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:08.108274   78377 cri.go:89] found id: ""
	I0422 18:27:08.108307   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.108318   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:08.108325   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:08.108384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:08.155343   78377 cri.go:89] found id: ""
	I0422 18:27:08.155366   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.155373   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:08.155379   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:08.155435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:08.194636   78377 cri.go:89] found id: ""
	I0422 18:27:08.194661   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.194672   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:08.194677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:08.194724   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:08.232992   78377 cri.go:89] found id: ""
	I0422 18:27:08.233017   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.233024   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:08.233029   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:08.233076   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:08.271349   78377 cri.go:89] found id: ""
	I0422 18:27:08.271381   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.271391   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:08.271407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:08.271459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:08.311991   78377 cri.go:89] found id: ""
	I0422 18:27:08.312021   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.312033   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:08.312042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:08.312097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:08.353301   78377 cri.go:89] found id: ""
	I0422 18:27:08.353326   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.353333   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:08.353340   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:08.353399   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:08.391989   78377 cri.go:89] found id: ""
	I0422 18:27:08.392015   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.392025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:08.392035   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:08.392048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:08.437228   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:08.437260   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:08.489086   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:08.489121   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:08.503588   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:08.503616   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:08.583824   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:08.583845   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:08.583858   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:07.203802   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:09.204754   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.862854   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.361215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.883779   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:12.883989   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:11.164702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:11.178228   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:11.178293   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:11.217691   78377 cri.go:89] found id: ""
	I0422 18:27:11.217719   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.217729   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:11.217735   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:11.217796   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:11.253648   78377 cri.go:89] found id: ""
	I0422 18:27:11.253676   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.253685   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:11.253692   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:11.253753   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:11.290934   78377 cri.go:89] found id: ""
	I0422 18:27:11.290968   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.290979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:11.290988   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:11.291051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:11.331215   78377 cri.go:89] found id: ""
	I0422 18:27:11.331240   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.331249   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:11.331254   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:11.331344   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:11.371595   78377 cri.go:89] found id: ""
	I0422 18:27:11.371621   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.371629   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:11.371634   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:11.371697   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:11.413577   78377 cri.go:89] found id: ""
	I0422 18:27:11.413607   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.413616   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:11.413624   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:11.413684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:11.450669   78377 cri.go:89] found id: ""
	I0422 18:27:11.450700   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.450709   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:11.450717   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:11.450779   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:11.488096   78377 cri.go:89] found id: ""
	I0422 18:27:11.488122   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.488131   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:11.488142   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:11.488156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.540258   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:11.540299   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:11.555878   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:11.555922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:11.638190   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:11.638212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:11.638224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.719691   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:11.719726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:14.268811   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:14.283695   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:14.283749   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:14.323252   78377 cri.go:89] found id: ""
	I0422 18:27:14.323286   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.323299   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:14.323306   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:14.323370   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:14.362354   78377 cri.go:89] found id: ""
	I0422 18:27:14.362375   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.362382   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:14.362387   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:14.362450   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:14.405439   78377 cri.go:89] found id: ""
	I0422 18:27:14.405460   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.405467   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:14.405473   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:14.405531   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:14.445358   78377 cri.go:89] found id: ""
	I0422 18:27:14.445389   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.445399   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:14.445407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:14.445476   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:14.481933   78377 cri.go:89] found id: ""
	I0422 18:27:14.481961   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.481969   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:14.481974   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:14.482033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:14.526992   78377 cri.go:89] found id: ""
	I0422 18:27:14.527019   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.527028   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:14.527040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:14.527089   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:14.562197   78377 cri.go:89] found id: ""
	I0422 18:27:14.562221   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.562229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:14.562238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:14.562287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:14.599098   78377 cri.go:89] found id: ""
	I0422 18:27:14.599141   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.599153   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:14.599164   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:14.599177   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.205525   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.706785   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:15.861009   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.861214   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.884371   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.384911   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.655768   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:14.655800   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:14.670894   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:14.670929   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:14.759845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:14.759863   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:14.759874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:14.839715   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:14.839752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:17.384859   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:17.399664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:17.399741   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:17.439786   78377 cri.go:89] found id: ""
	I0422 18:27:17.439809   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.439817   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:17.439822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:17.439878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:17.476532   78377 cri.go:89] found id: ""
	I0422 18:27:17.476553   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.476561   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:17.476566   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:17.476623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:17.513464   78377 cri.go:89] found id: ""
	I0422 18:27:17.513488   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.513495   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:17.513500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:17.513546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:17.548793   78377 cri.go:89] found id: ""
	I0422 18:27:17.548821   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.548831   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:17.548838   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:17.548888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:17.584600   78377 cri.go:89] found id: ""
	I0422 18:27:17.584626   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.584636   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:17.584644   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:17.584705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:17.621574   78377 cri.go:89] found id: ""
	I0422 18:27:17.621603   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.621615   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:17.621622   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:17.621686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:17.663252   78377 cri.go:89] found id: ""
	I0422 18:27:17.663283   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.663290   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:17.663295   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:17.663352   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:17.702987   78377 cri.go:89] found id: ""
	I0422 18:27:17.703014   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.703025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:17.703035   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:17.703049   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:17.758182   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:17.758222   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:17.775796   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:17.775828   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:17.866450   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:17.866493   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:17.866507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:17.947651   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:17.947685   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:16.204000   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:18.704622   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.864836   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:22.360984   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.883393   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:21.885743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.384476   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:20.489441   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:20.502920   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:20.502987   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:20.540533   78377 cri.go:89] found id: ""
	I0422 18:27:20.540557   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.540565   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:20.540569   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:20.540612   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:20.578789   78377 cri.go:89] found id: ""
	I0422 18:27:20.578815   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.578824   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:20.578832   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:20.578900   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:20.613481   78377 cri.go:89] found id: ""
	I0422 18:27:20.613515   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.613525   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:20.613533   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:20.613597   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:20.650289   78377 cri.go:89] found id: ""
	I0422 18:27:20.650320   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.650331   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:20.650339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:20.650400   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:20.686259   78377 cri.go:89] found id: ""
	I0422 18:27:20.686288   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.686300   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:20.686306   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:20.686367   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:20.725983   78377 cri.go:89] found id: ""
	I0422 18:27:20.726011   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.726018   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:20.726024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:20.726092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:20.762193   78377 cri.go:89] found id: ""
	I0422 18:27:20.762220   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.762229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:20.762237   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:20.762295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:20.800738   78377 cri.go:89] found id: ""
	I0422 18:27:20.800761   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.800769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:20.800776   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:20.800787   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.842744   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:20.842771   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:20.896307   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:20.896337   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:20.911457   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:20.911485   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:20.985249   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:20.985277   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:20.985293   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:23.560513   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:23.585134   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:23.585214   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:23.624947   78377 cri.go:89] found id: ""
	I0422 18:27:23.624972   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.624980   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:23.624986   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:23.625051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:23.661886   78377 cri.go:89] found id: ""
	I0422 18:27:23.661915   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.661924   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:23.661929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:23.661997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:23.701061   78377 cri.go:89] found id: ""
	I0422 18:27:23.701087   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.701097   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:23.701104   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:23.701163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:23.742728   78377 cri.go:89] found id: ""
	I0422 18:27:23.742753   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.742760   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:23.742765   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:23.742813   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:23.786970   78377 cri.go:89] found id: ""
	I0422 18:27:23.787002   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.787011   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:23.787017   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:23.787070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:23.825253   78377 cri.go:89] found id: ""
	I0422 18:27:23.825282   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.825292   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:23.825300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:23.825357   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:23.865774   78377 cri.go:89] found id: ""
	I0422 18:27:23.865799   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.865807   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:23.865812   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:23.865860   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:23.903212   78377 cri.go:89] found id: ""
	I0422 18:27:23.903239   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.903247   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:23.903254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:23.903267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:23.958931   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:23.958968   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:23.973352   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:23.973383   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:24.053335   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:24.053356   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:24.053367   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:24.136491   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:24.136528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.704821   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:23.203548   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:25.204601   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.361665   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.361708   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.388979   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.882505   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.679983   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:26.694521   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:26.694583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:26.733114   78377 cri.go:89] found id: ""
	I0422 18:27:26.733146   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.733156   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:26.733163   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:26.733221   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:26.776882   78377 cri.go:89] found id: ""
	I0422 18:27:26.776906   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.776913   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:26.776918   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:26.776966   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:26.822830   78377 cri.go:89] found id: ""
	I0422 18:27:26.822863   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.822874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:26.822882   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:26.822945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:26.868600   78377 cri.go:89] found id: ""
	I0422 18:27:26.868633   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.868641   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:26.868655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:26.868712   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:26.907547   78377 cri.go:89] found id: ""
	I0422 18:27:26.907570   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.907578   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:26.907583   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:26.907640   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:26.947594   78377 cri.go:89] found id: ""
	I0422 18:27:26.947635   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.947647   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:26.947656   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:26.947715   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:26.986732   78377 cri.go:89] found id: ""
	I0422 18:27:26.986761   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.986772   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:26.986780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:26.986838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:27.024338   78377 cri.go:89] found id: ""
	I0422 18:27:27.024370   78377 logs.go:276] 0 containers: []
	W0422 18:27:27.024378   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:27.024385   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:27.024396   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:27.077071   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:27.077112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:27.092664   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:27.092694   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:27.173056   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:27.173081   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:27.173099   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:27.257836   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:27.257877   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:27.714190   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.204420   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.861728   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:31.360750   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.360969   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.883051   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.386563   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:29.800456   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:29.816085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:29.816150   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:29.858826   78377 cri.go:89] found id: ""
	I0422 18:27:29.858857   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.858878   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:29.858886   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:29.858956   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:29.900369   78377 cri.go:89] found id: ""
	I0422 18:27:29.900403   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.900417   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:29.900424   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:29.900490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:29.939766   78377 cri.go:89] found id: ""
	I0422 18:27:29.939801   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.939811   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:29.939818   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:29.939889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:29.986579   78377 cri.go:89] found id: ""
	I0422 18:27:29.986607   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.986617   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:29.986625   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:29.986685   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:30.030059   78377 cri.go:89] found id: ""
	I0422 18:27:30.030090   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.030102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:30.030110   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:30.030192   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:30.077543   78377 cri.go:89] found id: ""
	I0422 18:27:30.077573   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.077581   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:30.077586   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:30.077645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:30.123087   78377 cri.go:89] found id: ""
	I0422 18:27:30.123116   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.123137   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:30.123145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:30.123203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:30.160589   78377 cri.go:89] found id: ""
	I0422 18:27:30.160613   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.160621   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:30.160628   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:30.160639   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:30.213321   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:30.213352   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:30.228102   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:30.228129   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:30.303977   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:30.304013   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:30.304029   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:30.383817   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:30.383851   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:32.930619   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:32.943854   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:32.943914   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:32.984112   78377 cri.go:89] found id: ""
	I0422 18:27:32.984138   78377 logs.go:276] 0 containers: []
	W0422 18:27:32.984146   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:32.984151   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:32.984200   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:33.022243   78377 cri.go:89] found id: ""
	I0422 18:27:33.022283   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.022294   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:33.022301   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:33.022366   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:33.061177   78377 cri.go:89] found id: ""
	I0422 18:27:33.061205   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.061214   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:33.061222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:33.061281   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:33.104430   78377 cri.go:89] found id: ""
	I0422 18:27:33.104458   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.104466   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:33.104471   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:33.104528   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:33.140255   78377 cri.go:89] found id: ""
	I0422 18:27:33.140284   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.140295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:33.140302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:33.140362   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:33.179487   78377 cri.go:89] found id: ""
	I0422 18:27:33.179512   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.179519   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:33.179524   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:33.179576   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:33.217226   78377 cri.go:89] found id: ""
	I0422 18:27:33.217258   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.217265   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:33.217271   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:33.217319   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:33.257076   78377 cri.go:89] found id: ""
	I0422 18:27:33.257104   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.257114   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:33.257123   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:33.257137   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:33.271183   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:33.271211   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:33.344812   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:33.344843   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:33.344859   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:33.420605   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:33.420640   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:33.465779   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:33.465807   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:32.704424   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:34.705215   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.861184   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.361048   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.883602   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.383601   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:36.019062   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:36.039226   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:36.039305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:36.082940   78377 cri.go:89] found id: ""
	I0422 18:27:36.082978   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.082991   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:36.083000   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:36.083063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:36.120371   78377 cri.go:89] found id: ""
	I0422 18:27:36.120416   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.120428   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:36.120436   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:36.120496   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:36.158018   78377 cri.go:89] found id: ""
	I0422 18:27:36.158051   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.158063   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:36.158070   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:36.158131   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:36.196192   78377 cri.go:89] found id: ""
	I0422 18:27:36.196221   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.196231   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:36.196238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:36.196305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:36.237742   78377 cri.go:89] found id: ""
	I0422 18:27:36.237773   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.237784   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:36.237791   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:36.237852   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:36.277884   78377 cri.go:89] found id: ""
	I0422 18:27:36.277911   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.277918   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:36.277923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:36.277993   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:36.314897   78377 cri.go:89] found id: ""
	I0422 18:27:36.314929   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.314939   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:36.314947   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:36.315009   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:36.354806   78377 cri.go:89] found id: ""
	I0422 18:27:36.354833   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.354843   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:36.354851   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:36.354863   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.406941   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:36.406981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:36.423308   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:36.423344   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:36.507202   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:36.507223   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:36.507238   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:36.582489   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:36.582525   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:39.127409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:39.140820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:39.140895   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:39.182068   78377 cri.go:89] found id: ""
	I0422 18:27:39.182094   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.182105   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:39.182112   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:39.182169   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:39.222711   78377 cri.go:89] found id: ""
	I0422 18:27:39.222735   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.222751   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:39.222756   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:39.222827   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:39.263396   78377 cri.go:89] found id: ""
	I0422 18:27:39.263423   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.263432   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:39.263437   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:39.263490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:39.300559   78377 cri.go:89] found id: ""
	I0422 18:27:39.300589   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.300603   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:39.300610   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:39.300672   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:39.336486   78377 cri.go:89] found id: ""
	I0422 18:27:39.336521   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.336530   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:39.336536   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:39.336584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:39.373985   78377 cri.go:89] found id: ""
	I0422 18:27:39.374020   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.374030   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:39.374038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:39.374097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:39.412511   78377 cri.go:89] found id: ""
	I0422 18:27:39.412540   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.412547   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:39.412553   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:39.412616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:39.459197   78377 cri.go:89] found id: ""
	I0422 18:27:39.459233   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.459243   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:39.459254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:39.459269   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:39.514579   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:39.514623   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:39.530082   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:39.530107   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:39.603797   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:39.603830   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:39.603854   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:37.203082   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.204563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.860739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.861544   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.385271   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.389273   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.684853   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:39.684890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:42.227702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:42.243438   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:42.243499   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:42.290374   78377 cri.go:89] found id: ""
	I0422 18:27:42.290402   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.290413   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:42.290420   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:42.290481   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:42.332793   78377 cri.go:89] found id: ""
	I0422 18:27:42.332828   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.332840   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:42.332875   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:42.332937   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:42.375844   78377 cri.go:89] found id: ""
	I0422 18:27:42.375876   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.375884   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:42.375889   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:42.375945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:42.419725   78377 cri.go:89] found id: ""
	I0422 18:27:42.419758   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.419769   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:42.419777   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:42.419878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:42.453969   78377 cri.go:89] found id: ""
	I0422 18:27:42.454004   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.454014   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:42.454022   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:42.454080   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:42.489045   78377 cri.go:89] found id: ""
	I0422 18:27:42.489077   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.489087   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:42.489095   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:42.489157   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:42.529127   78377 cri.go:89] found id: ""
	I0422 18:27:42.529155   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.529166   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:42.529174   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:42.529229   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:42.566253   78377 cri.go:89] found id: ""
	I0422 18:27:42.566278   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.566286   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:42.566293   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:42.566307   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:42.622054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:42.622101   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:42.636278   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:42.636304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:42.712179   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:42.712203   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:42.712215   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:42.791885   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:42.791928   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:41.705615   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.203947   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.361656   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:47.860929   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.882684   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:46.886119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:49.382017   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.337091   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:45.353053   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:45.353133   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:45.393230   78377 cri.go:89] found id: ""
	I0422 18:27:45.393257   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.393267   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:45.393274   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:45.393330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:45.432183   78377 cri.go:89] found id: ""
	I0422 18:27:45.432210   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.432220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:45.432228   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:45.432285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:45.468114   78377 cri.go:89] found id: ""
	I0422 18:27:45.468147   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.468157   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:45.468169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:45.468233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:45.504793   78377 cri.go:89] found id: ""
	I0422 18:27:45.504817   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.504836   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:45.504841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:45.504889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:45.544822   78377 cri.go:89] found id: ""
	I0422 18:27:45.544851   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.544862   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:45.544868   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:45.544934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:45.588266   78377 cri.go:89] found id: ""
	I0422 18:27:45.588289   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.588322   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:45.588330   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:45.588391   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:45.625549   78377 cri.go:89] found id: ""
	I0422 18:27:45.625576   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.625583   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:45.625589   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:45.625639   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:45.663066   78377 cri.go:89] found id: ""
	I0422 18:27:45.663096   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.663104   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:45.663114   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:45.663143   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:45.715051   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:45.715082   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:45.729496   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:45.729523   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:45.801270   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:45.801296   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:45.801312   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:45.886530   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:45.886561   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:48.429822   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:48.444528   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:48.444610   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:48.483164   78377 cri.go:89] found id: ""
	I0422 18:27:48.483194   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.483204   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:48.483210   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:48.483257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:48.520295   78377 cri.go:89] found id: ""
	I0422 18:27:48.520321   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.520328   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:48.520333   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:48.520378   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:48.558839   78377 cri.go:89] found id: ""
	I0422 18:27:48.558866   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.558875   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:48.558881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:48.558939   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:48.599692   78377 cri.go:89] found id: ""
	I0422 18:27:48.599715   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.599722   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:48.599728   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:48.599773   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:48.638457   78377 cri.go:89] found id: ""
	I0422 18:27:48.638486   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.638494   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:48.638500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:48.638561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:48.677344   78377 cri.go:89] found id: ""
	I0422 18:27:48.677383   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.677395   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:48.677402   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:48.677466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:48.717129   78377 cri.go:89] found id: ""
	I0422 18:27:48.717155   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.717163   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:48.717169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:48.717219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:48.758256   78377 cri.go:89] found id: ""
	I0422 18:27:48.758281   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.758289   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:48.758297   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:48.758311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:48.810377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:48.810415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:48.824919   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:48.824949   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:48.908446   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:48.908473   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:48.908569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:48.984952   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:48.984991   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
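	(For reference: the repeated "listing CRI containers in root" / "No container was found matching" entries above come from the logged command `sudo crictl ps -a --quiet --name=<component>`, which prints one container ID per line and prints nothing when the component has no container. A minimal, assumed sketch of that query follows; the function and package names are illustrative, not minikube's actual source.)

	```go
	// Sketch only: reproduce the per-component crictl query seen in the log
	// lines above. Assumes crictl is installed on the node and runnable via sudo.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainersByName runs `crictl ps -a --quiet --name=<name>` and returns
	// the container IDs it prints, one per line (empty slice when none match).
	func listContainersByName(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
			ids, err := listContainersByName(name)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
			}
		}
	}
	```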
	I0422 18:27:46.703083   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:48.705413   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:50.361465   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:52.364509   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.384561   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.882657   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.527387   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:51.541482   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:51.541560   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.579020   78377 cri.go:89] found id: ""
	I0422 18:27:51.579098   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.579114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:51.579134   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:51.579204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:51.616430   78377 cri.go:89] found id: ""
	I0422 18:27:51.616456   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.616465   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:51.616470   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:51.616516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:51.654089   78377 cri.go:89] found id: ""
	I0422 18:27:51.654120   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.654131   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:51.654138   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:51.654201   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:51.693945   78377 cri.go:89] found id: ""
	I0422 18:27:51.693979   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.693993   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:51.694000   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:51.694068   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:51.732873   78377 cri.go:89] found id: ""
	I0422 18:27:51.732906   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.732917   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:51.732923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:51.732990   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:51.770772   78377 cri.go:89] found id: ""
	I0422 18:27:51.770794   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.770801   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:51.770807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:51.770862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:51.819370   78377 cri.go:89] found id: ""
	I0422 18:27:51.819397   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.819405   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:51.819411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:51.819459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:51.858001   78377 cri.go:89] found id: ""
	I0422 18:27:51.858033   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.858044   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:51.858055   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:51.858069   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:51.938531   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:51.938557   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:51.938571   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:52.014397   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:52.014435   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:52.059420   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:52.059458   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:52.119498   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:52.119534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:54.634238   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:54.649044   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:54.649119   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.203623   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.205834   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.863919   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.360796   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:56.383743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:58.383783   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.691846   78377 cri.go:89] found id: ""
	I0422 18:27:54.691879   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.691890   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:54.691907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:54.691970   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:54.731466   78377 cri.go:89] found id: ""
	I0422 18:27:54.731496   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.731507   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:54.731515   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:54.731588   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:54.776948   78377 cri.go:89] found id: ""
	I0422 18:27:54.776972   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.776979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:54.776984   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:54.777031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:54.815908   78377 cri.go:89] found id: ""
	I0422 18:27:54.815939   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.815946   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:54.815952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:54.815997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:54.856641   78377 cri.go:89] found id: ""
	I0422 18:27:54.856673   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.856684   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:54.856690   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:54.856757   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:54.896968   78377 cri.go:89] found id: ""
	I0422 18:27:54.896996   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.897004   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:54.897009   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:54.897073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:54.936353   78377 cri.go:89] found id: ""
	I0422 18:27:54.936388   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.936400   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:54.936407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:54.936468   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:54.976009   78377 cri.go:89] found id: ""
	I0422 18:27:54.976038   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.976048   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:54.976058   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:54.976071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:55.027890   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:55.027924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:55.041914   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:55.041939   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:55.112556   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.112583   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:55.112597   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:55.187688   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:55.187723   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:57.730259   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:57.745006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:57.745073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:57.786906   78377 cri.go:89] found id: ""
	I0422 18:27:57.786942   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.786952   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:57.786959   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:57.787019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:57.827158   78377 cri.go:89] found id: ""
	I0422 18:27:57.827188   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.827199   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:57.827206   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:57.827254   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:57.864370   78377 cri.go:89] found id: ""
	I0422 18:27:57.864405   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.864413   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:57.864419   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:57.864475   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:57.903747   78377 cri.go:89] found id: ""
	I0422 18:27:57.903773   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.903781   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:57.903786   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:57.903846   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:57.941674   78377 cri.go:89] found id: ""
	I0422 18:27:57.941705   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.941713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:57.941718   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:57.941767   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:57.984888   78377 cri.go:89] found id: ""
	I0422 18:27:57.984918   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.984929   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:57.984935   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:57.984980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:58.026964   78377 cri.go:89] found id: ""
	I0422 18:27:58.026993   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.027006   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:58.027012   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:58.027059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:58.065403   78377 cri.go:89] found id: ""
	I0422 18:27:58.065430   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.065440   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:58.065450   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:58.065464   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:58.152471   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:58.152518   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:58.198766   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:58.198803   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:58.257760   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:58.257798   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:58.272656   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:58.272693   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:58.385784   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.703110   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.704061   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.704421   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.361229   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:01.362273   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.385750   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:02.886349   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.886736   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:00.902607   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:00.902684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:00.941476   78377 cri.go:89] found id: ""
	I0422 18:28:00.941506   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.941515   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:00.941521   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:00.941571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:00.983107   78377 cri.go:89] found id: ""
	I0422 18:28:00.983142   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.983152   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:00.983159   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:00.983216   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:01.024419   78377 cri.go:89] found id: ""
	I0422 18:28:01.024448   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.024455   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:01.024461   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:01.024517   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:01.065941   78377 cri.go:89] found id: ""
	I0422 18:28:01.065973   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.065984   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:01.065992   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:01.066041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:01.107857   78377 cri.go:89] found id: ""
	I0422 18:28:01.107898   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.107908   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:01.107916   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:01.107980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:01.149626   78377 cri.go:89] found id: ""
	I0422 18:28:01.149657   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.149667   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:01.149676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:01.149740   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:01.190491   78377 cri.go:89] found id: ""
	I0422 18:28:01.190520   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.190529   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:01.190535   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:01.190590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:01.231145   78377 cri.go:89] found id: ""
	I0422 18:28:01.231176   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.231187   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:01.231197   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:01.231208   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:01.317826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:01.317874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:01.369441   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:01.369478   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:01.432210   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:01.432251   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:01.446720   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:01.446749   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:01.528643   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.029816   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:04.044751   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:04.044836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:04.085044   78377 cri.go:89] found id: ""
	I0422 18:28:04.085077   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.085089   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:04.085097   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:04.085148   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:04.129071   78377 cri.go:89] found id: ""
	I0422 18:28:04.129100   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.129111   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:04.129118   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:04.129181   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:04.167838   78377 cri.go:89] found id: ""
	I0422 18:28:04.167864   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.167874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:04.167881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:04.167943   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:04.216283   78377 cri.go:89] found id: ""
	I0422 18:28:04.216313   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.216321   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:04.216327   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:04.216376   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:04.255693   78377 cri.go:89] found id: ""
	I0422 18:28:04.255724   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.255731   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:04.255737   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:04.255786   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:04.293601   78377 cri.go:89] found id: ""
	I0422 18:28:04.293639   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.293651   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:04.293659   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:04.293709   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:04.358730   78377 cri.go:89] found id: ""
	I0422 18:28:04.358755   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.358767   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:04.358774   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:04.358837   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:04.399231   78377 cri.go:89] found id: ""
	I0422 18:28:04.399261   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.399271   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:04.399280   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:04.399291   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:04.415526   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:04.415558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:04.491845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.491871   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:04.491885   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:04.575076   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:04.575148   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:04.621931   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:04.621956   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:02.203877   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:04.204896   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:03.860506   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.860713   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.384180   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.884714   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.173117   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:07.188914   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:07.188973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:07.233867   78377 cri.go:89] found id: ""
	I0422 18:28:07.233894   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.233902   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:07.233907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:07.233968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:07.274777   78377 cri.go:89] found id: ""
	I0422 18:28:07.274818   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.274828   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:07.274835   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:07.274897   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:07.310813   78377 cri.go:89] found id: ""
	I0422 18:28:07.310864   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.310874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:07.310881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:07.310951   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:07.348397   78377 cri.go:89] found id: ""
	I0422 18:28:07.348423   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.348431   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:07.348436   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:07.348489   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:07.387344   78377 cri.go:89] found id: ""
	I0422 18:28:07.387371   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.387381   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:07.387388   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:07.387443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:07.426117   78377 cri.go:89] found id: ""
	I0422 18:28:07.426147   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.426158   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:07.426166   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:07.426233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:07.466624   78377 cri.go:89] found id: ""
	I0422 18:28:07.466653   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.466664   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:07.466671   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:07.466729   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:07.504282   78377 cri.go:89] found id: ""
	I0422 18:28:07.504306   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.504342   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:07.504353   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:07.504369   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:07.584111   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:07.584146   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:07.627212   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:07.627240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.676814   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:07.676849   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:07.691117   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:07.691156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:07.764300   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:06.206560   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.703406   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.364348   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.861760   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.361127   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.392330   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:12.883081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.265313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:10.280094   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:10.280170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:10.318208   78377 cri.go:89] found id: ""
	I0422 18:28:10.318236   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.318245   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:10.318251   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:10.318305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:10.353450   78377 cri.go:89] found id: ""
	I0422 18:28:10.353477   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.353484   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:10.353490   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:10.353547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:10.398359   78377 cri.go:89] found id: ""
	I0422 18:28:10.398389   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.398400   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:10.398411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:10.398474   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:10.435896   78377 cri.go:89] found id: ""
	I0422 18:28:10.435928   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.435939   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:10.435946   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:10.436025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:10.479313   78377 cri.go:89] found id: ""
	I0422 18:28:10.479342   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.479353   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:10.479360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:10.479433   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:10.521949   78377 cri.go:89] found id: ""
	I0422 18:28:10.521978   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.521990   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:10.521997   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:10.522054   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:10.557697   78377 cri.go:89] found id: ""
	I0422 18:28:10.557722   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.557732   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:10.557739   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:10.557804   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:10.595060   78377 cri.go:89] found id: ""
	I0422 18:28:10.595090   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.595102   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:10.595112   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:10.595142   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:10.649535   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:10.649570   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:10.664176   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:10.664210   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:10.748778   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.748818   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:10.748839   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:10.858019   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:10.858062   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:13.405737   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:13.420265   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:13.420342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:13.456505   78377 cri.go:89] found id: ""
	I0422 18:28:13.456534   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.456545   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:13.456551   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:13.456611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:13.493435   78377 cri.go:89] found id: ""
	I0422 18:28:13.493464   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.493477   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:13.493485   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:13.493541   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:13.530572   78377 cri.go:89] found id: ""
	I0422 18:28:13.530602   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.530614   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:13.530620   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:13.530682   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:13.565448   78377 cri.go:89] found id: ""
	I0422 18:28:13.565472   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.565480   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:13.565485   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:13.565574   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:13.613806   78377 cri.go:89] found id: ""
	I0422 18:28:13.613840   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.613851   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:13.613860   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:13.613924   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:13.649483   78377 cri.go:89] found id: ""
	I0422 18:28:13.649511   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.649522   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:13.649529   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:13.649589   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:13.689149   78377 cri.go:89] found id: ""
	I0422 18:28:13.689182   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.689193   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:13.689200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:13.689257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:13.726431   78377 cri.go:89] found id: ""
	I0422 18:28:13.726454   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.726461   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:13.726468   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:13.726480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:13.782843   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:13.782882   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:13.797390   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:13.797415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:13.877880   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:13.877905   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:13.877923   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:13.959103   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:13.959154   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:10.705202   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.203760   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.205898   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.361423   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:17.363341   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:14.883352   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.886433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.382478   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
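	(For reference: each log-gathering cycle below starts with the logged check `sudo pgrep -xnf kube-apiserver.*minikube.*`; pgrep exits non-zero when no matching process exists, and the `localhost:8443 ... connection refused` errors above show the apiserver is indeed not serving, so the cycle keeps repeating. A minimal, assumed sketch of that probe follows; it is not minikube's actual code.)

	```go
	// Sketch only: the apiserver process probe that precedes each cycle,
	// assuming pgrep is available on the node and runnable via sudo.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// apiserverRunning reports whether a kube-apiserver process whose full
	// command line matches the minikube pattern is present (pgrep exit 0).
	func apiserverRunning() bool {
		// -f matches the full command line, -x requires a whole-line match,
		// -n picks the newest matching process.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		return err == nil
	}

	func main() {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
		} else {
			fmt.Println("kube-apiserver not running; gathering logs instead")
		}
	}
	```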
	I0422 18:28:16.502589   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:16.519996   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:16.520070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:16.559001   78377 cri.go:89] found id: ""
	I0422 18:28:16.559029   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.559037   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:16.559043   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:16.559095   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:16.620188   78377 cri.go:89] found id: ""
	I0422 18:28:16.620211   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.620219   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:16.620224   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:16.620283   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:16.670220   78377 cri.go:89] found id: ""
	I0422 18:28:16.670253   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.670264   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:16.670279   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:16.670345   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:16.710931   78377 cri.go:89] found id: ""
	I0422 18:28:16.710962   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.710973   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:16.710980   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:16.711043   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:16.748793   78377 cri.go:89] found id: ""
	I0422 18:28:16.748838   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.748845   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:16.748851   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:16.748904   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:16.785518   78377 cri.go:89] found id: ""
	I0422 18:28:16.785547   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.785554   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:16.785564   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:16.785616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:16.825141   78377 cri.go:89] found id: ""
	I0422 18:28:16.825174   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.825192   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:16.825200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:16.825265   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:16.866918   78377 cri.go:89] found id: ""
	I0422 18:28:16.866947   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.866958   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:16.866972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:16.866987   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:16.912589   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:16.912633   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:16.968407   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:16.968446   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:16.983202   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:16.983241   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:17.063852   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:17.063875   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:17.063889   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:19.645012   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:17.703917   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.704958   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.861537   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.862949   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.882158   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:23.885280   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.659676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:19.659750   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:19.697348   78377 cri.go:89] found id: ""
	I0422 18:28:19.697382   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.697393   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:19.697401   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:19.697461   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:19.738830   78377 cri.go:89] found id: ""
	I0422 18:28:19.738864   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.738876   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:19.738883   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:19.738945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:19.783452   78377 cri.go:89] found id: ""
	I0422 18:28:19.783476   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.783483   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:19.783491   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:19.783554   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:19.826848   78377 cri.go:89] found id: ""
	I0422 18:28:19.826875   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.826886   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:19.826893   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:19.826945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:19.867207   78377 cri.go:89] found id: ""
	I0422 18:28:19.867229   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.867236   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:19.867242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:19.867298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:19.903752   78377 cri.go:89] found id: ""
	I0422 18:28:19.903783   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.903799   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:19.903806   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:19.903870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:19.946891   78377 cri.go:89] found id: ""
	I0422 18:28:19.946914   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.946921   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:19.946927   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:19.946997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:19.989272   78377 cri.go:89] found id: ""
	I0422 18:28:19.989297   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.989304   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:19.989312   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:19.989323   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:20.038854   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:20.038887   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:20.053553   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:20.053584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:20.132687   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:20.132712   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:20.132727   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:20.209600   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:20.209634   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.752356   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:22.765506   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:22.765567   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:22.804991   78377 cri.go:89] found id: ""
	I0422 18:28:22.805022   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.805029   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:22.805035   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:22.805082   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:22.843726   78377 cri.go:89] found id: ""
	I0422 18:28:22.843757   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.843768   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:22.843775   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:22.843838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:22.884584   78377 cri.go:89] found id: ""
	I0422 18:28:22.884610   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.884620   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:22.884627   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:22.884701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:22.920974   78377 cri.go:89] found id: ""
	I0422 18:28:22.921004   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.921020   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:22.921028   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:22.921092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:22.956676   78377 cri.go:89] found id: ""
	I0422 18:28:22.956702   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.956713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:22.956720   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:22.956784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:22.997517   78377 cri.go:89] found id: ""
	I0422 18:28:22.997545   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.997553   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:22.997559   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:22.997623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:23.036448   78377 cri.go:89] found id: ""
	I0422 18:28:23.036478   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.036489   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:23.036497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:23.036561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:23.075567   78377 cri.go:89] found id: ""
	I0422 18:28:23.075592   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.075600   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:23.075611   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:23.075625   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:23.130372   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:23.130408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:23.147534   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:23.147567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:23.222730   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:23.222753   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:23.222765   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:23.301972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:23.302006   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.204356   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.703765   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.361251   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:26.862825   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.886291   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:28.382905   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.847521   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:25.861780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:25.861867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:25.899314   78377 cri.go:89] found id: ""
	I0422 18:28:25.899341   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.899349   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:25.899355   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:25.899412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:25.940057   78377 cri.go:89] found id: ""
	I0422 18:28:25.940088   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.940099   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:25.940106   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:25.940163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:25.974923   78377 cri.go:89] found id: ""
	I0422 18:28:25.974951   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.974959   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:25.974968   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:25.975041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:26.012533   78377 cri.go:89] found id: ""
	I0422 18:28:26.012559   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.012566   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:26.012572   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:26.012620   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:26.049804   78377 cri.go:89] found id: ""
	I0422 18:28:26.049828   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.049835   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:26.049841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:26.049888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:26.092803   78377 cri.go:89] found id: ""
	I0422 18:28:26.092830   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.092842   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:26.092850   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:26.092919   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:26.130442   78377 cri.go:89] found id: ""
	I0422 18:28:26.130471   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.130480   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:26.130487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:26.130544   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:26.165933   78377 cri.go:89] found id: ""
	I0422 18:28:26.165957   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.165966   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:26.165974   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:26.165986   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:26.245237   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:26.245259   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:26.245278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:26.330143   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:26.330181   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.372178   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:26.372204   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:26.429779   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:26.429817   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:28.945985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:28.960470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:28.960546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:28.999618   78377 cri.go:89] found id: ""
	I0422 18:28:28.999639   78377 logs.go:276] 0 containers: []
	W0422 18:28:28.999648   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:28.999653   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:28.999711   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:29.034177   78377 cri.go:89] found id: ""
	I0422 18:28:29.034211   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.034220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:29.034225   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:29.034286   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:29.073759   78377 cri.go:89] found id: ""
	I0422 18:28:29.073782   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.073790   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:29.073796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:29.073857   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:29.111898   78377 cri.go:89] found id: ""
	I0422 18:28:29.111929   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.111941   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:29.111948   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:29.112005   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:29.148486   78377 cri.go:89] found id: ""
	I0422 18:28:29.148520   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.148531   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:29.148539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:29.148602   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:29.186715   78377 cri.go:89] found id: ""
	I0422 18:28:29.186743   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.186753   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:29.186759   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:29.186805   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:29.226387   78377 cri.go:89] found id: ""
	I0422 18:28:29.226422   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.226433   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:29.226440   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:29.226508   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:29.274102   78377 cri.go:89] found id: ""
	I0422 18:28:29.274131   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.274142   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:29.274152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:29.274165   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:29.333066   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:29.333104   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:29.348376   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:29.348411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:29.422976   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:29.423009   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:29.423022   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:29.501211   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:29.501253   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.705590   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.205641   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.361439   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:31.361534   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:30.383502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.887006   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.048316   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:32.063859   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:32.063934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:32.104527   78377 cri.go:89] found id: ""
	I0422 18:28:32.104560   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.104571   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:32.104580   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:32.104645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:32.142945   78377 cri.go:89] found id: ""
	I0422 18:28:32.142976   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.142984   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:32.142990   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:32.143036   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:32.182359   78377 cri.go:89] found id: ""
	I0422 18:28:32.182385   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.182393   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:32.182399   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:32.182446   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:32.223041   78377 cri.go:89] found id: ""
	I0422 18:28:32.223069   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.223077   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:32.223083   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:32.223161   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:32.261892   78377 cri.go:89] found id: ""
	I0422 18:28:32.261924   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.261936   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:32.261943   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:32.262008   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:32.307497   78377 cri.go:89] found id: ""
	I0422 18:28:32.307527   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.307537   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:32.307546   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:32.307617   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:32.345180   78377 cri.go:89] found id: ""
	I0422 18:28:32.345214   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.345227   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:32.345235   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:32.345299   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:32.385999   78377 cri.go:89] found id: ""
	I0422 18:28:32.386025   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.386033   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:32.386041   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:32.386053   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:32.444377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:32.444436   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:32.460566   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:32.460594   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:32.535839   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:32.535860   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:32.535872   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:32.621998   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:32.622039   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:31.704145   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.704841   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.860769   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.860833   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.861583   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.382871   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.383164   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.165079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:35.178804   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:35.178877   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:35.221032   78377 cri.go:89] found id: ""
	I0422 18:28:35.221065   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.221076   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:35.221083   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:35.221170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:35.262550   78377 cri.go:89] found id: ""
	I0422 18:28:35.262573   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.262583   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:35.262589   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:35.262651   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:35.301799   78377 cri.go:89] found id: ""
	I0422 18:28:35.301826   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.301834   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:35.301840   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:35.301901   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:35.340606   78377 cri.go:89] found id: ""
	I0422 18:28:35.340635   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.340642   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:35.340647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:35.340695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:35.386226   78377 cri.go:89] found id: ""
	I0422 18:28:35.386251   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.386261   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:35.386268   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:35.386330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:35.424555   78377 cri.go:89] found id: ""
	I0422 18:28:35.424584   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.424594   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:35.424601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:35.424662   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:35.465856   78377 cri.go:89] found id: ""
	I0422 18:28:35.465886   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.465895   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:35.465901   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:35.465963   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:35.504849   78377 cri.go:89] found id: ""
	I0422 18:28:35.504877   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.504887   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:35.504898   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:35.504931   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:35.579177   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:35.579202   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:35.579217   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:35.656322   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:35.656359   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.700376   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:35.700411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:35.753742   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:35.753776   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.269536   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:38.285945   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:38.286019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:38.324408   78377 cri.go:89] found id: ""
	I0422 18:28:38.324441   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.324461   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:38.324468   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:38.324539   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:38.362320   78377 cri.go:89] found id: ""
	I0422 18:28:38.362343   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.362350   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:38.362363   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:38.362411   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:38.404208   78377 cri.go:89] found id: ""
	I0422 18:28:38.404234   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.404243   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:38.404248   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:38.404309   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:38.448250   78377 cri.go:89] found id: ""
	I0422 18:28:38.448314   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.448325   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:38.448332   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:38.448397   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:38.485803   78377 cri.go:89] found id: ""
	I0422 18:28:38.485836   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.485848   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:38.485856   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:38.485915   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:38.525903   78377 cri.go:89] found id: ""
	I0422 18:28:38.525933   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.525943   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:38.525952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:38.526031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:38.562638   78377 cri.go:89] found id: ""
	I0422 18:28:38.562664   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.562672   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:38.562677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:38.562726   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:38.603614   78377 cri.go:89] found id: ""
	I0422 18:28:38.603642   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.603653   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:38.603662   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:38.603673   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:38.658054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:38.658086   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.674884   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:38.674908   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:38.748462   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:38.748502   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:38.748528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:38.826701   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:38.826741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:36.204210   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:38.205076   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:40.360574   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.862692   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:39.882407   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.882939   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:43.883102   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.374075   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:41.389161   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:41.389235   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:41.427033   78377 cri.go:89] found id: ""
	I0422 18:28:41.427064   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.427075   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:41.427096   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:41.427178   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:41.465376   78377 cri.go:89] found id: ""
	I0422 18:28:41.465408   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.465419   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:41.465427   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:41.465512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:41.502451   78377 cri.go:89] found id: ""
	I0422 18:28:41.502482   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.502490   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:41.502501   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:41.502563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:41.538748   78377 cri.go:89] found id: ""
	I0422 18:28:41.538784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.538796   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:41.538803   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:41.538862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:41.576877   78377 cri.go:89] found id: ""
	I0422 18:28:41.576928   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.576941   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:41.576949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:41.577010   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:41.615062   78377 cri.go:89] found id: ""
	I0422 18:28:41.615094   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.615105   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:41.615113   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:41.615190   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:41.656757   78377 cri.go:89] found id: ""
	I0422 18:28:41.656784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.656792   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:41.656796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:41.656861   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:41.694351   78377 cri.go:89] found id: ""
	I0422 18:28:41.694374   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.694382   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:41.694390   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:41.694402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:41.775490   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:41.775528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.820152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:41.820182   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:41.874035   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:41.874071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:41.889510   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:41.889534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:41.967706   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:44.468471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:44.483108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:44.483202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:44.522503   78377 cri.go:89] found id: ""
	I0422 18:28:44.522528   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.522536   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:44.522542   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:44.522590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:44.562004   78377 cri.go:89] found id: ""
	I0422 18:28:44.562028   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.562036   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:44.562042   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:44.562098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:44.608907   78377 cri.go:89] found id: ""
	I0422 18:28:44.608944   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.608955   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:44.608964   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:44.609027   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:44.651192   78377 cri.go:89] found id: ""
	I0422 18:28:44.651225   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.651235   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:44.651242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:44.651304   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:40.703806   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.704426   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.707600   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.361890   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.860686   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.883300   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.884863   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.693057   78377 cri.go:89] found id: ""
	I0422 18:28:44.693095   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.693102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:44.693108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:44.693152   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:44.731029   78377 cri.go:89] found id: ""
	I0422 18:28:44.731070   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.731079   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:44.731092   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:44.731165   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:44.768935   78377 cri.go:89] found id: ""
	I0422 18:28:44.768964   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.768985   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:44.768993   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:44.769044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:44.814942   78377 cri.go:89] found id: ""
	I0422 18:28:44.814966   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.814984   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:44.814992   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:44.815012   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:44.872586   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:44.872612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:44.929068   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:44.929125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:44.945931   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:44.945960   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:45.019871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:45.019907   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:45.019922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:47.601880   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:47.616133   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:47.616219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:47.656526   78377 cri.go:89] found id: ""
	I0422 18:28:47.656547   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.656554   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:47.656560   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:47.656618   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:47.696580   78377 cri.go:89] found id: ""
	I0422 18:28:47.696609   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.696619   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:47.696626   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:47.696684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:47.737309   78377 cri.go:89] found id: ""
	I0422 18:28:47.737340   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.737351   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:47.737359   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:47.737413   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:47.774541   78377 cri.go:89] found id: ""
	I0422 18:28:47.774572   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.774583   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:47.774591   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:47.774652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:47.810397   78377 cri.go:89] found id: ""
	I0422 18:28:47.810429   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.810437   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:47.810444   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:47.810506   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:47.847293   78377 cri.go:89] found id: ""
	I0422 18:28:47.847327   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.847337   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:47.847345   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:47.847403   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:47.887454   78377 cri.go:89] found id: ""
	I0422 18:28:47.887476   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.887486   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:47.887493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:47.887553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:47.926706   78377 cri.go:89] found id: ""
	I0422 18:28:47.926731   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.926740   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:47.926750   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:47.926769   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:48.007354   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:48.007382   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:48.007398   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:48.094355   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:48.094394   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:48.137163   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:48.137194   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:48.187732   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:48.187767   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
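
The block above is one complete diagnostics pass: with the apiserver on localhost:8443 refusing connections, minikube probes each expected control-plane component with "crictl ps -a --quiet --name=<component>", finds no container IDs, and falls back to collecting kubelet, dmesg, CRI-O and container-status logs. A minimal sketch of that probe, shelling out to crictl the same way (this assumes sudo and crictl are available on the node and is an illustration of the check, not minikube's own helper in cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs `sudo crictl ps -a --quiet --name=<name>` and returns the
// container IDs it printed, one per line, mirroring the "found id" /
// "0 containers" lines in the log above.
func probe(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		ids, err := probe(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}

When every probe comes back empty, as it does here, the only remaining evidence is the host-level journals, which is why each pass ends with the kubelet, dmesg, CRI-O and "container status" gathers.
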
	I0422 18:28:47.207153   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.704440   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.863696   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.360739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.384172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.386468   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.703686   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:50.717040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:50.717113   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:50.751573   78377 cri.go:89] found id: ""
	I0422 18:28:50.751598   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.751610   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:50.751617   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:50.751674   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:50.790434   78377 cri.go:89] found id: ""
	I0422 18:28:50.790465   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.790476   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:50.790483   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:50.790537   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:50.852414   78377 cri.go:89] found id: ""
	I0422 18:28:50.852442   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.852451   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:50.852457   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:50.852512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:50.891439   78377 cri.go:89] found id: ""
	I0422 18:28:50.891470   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.891481   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:50.891488   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:50.891553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:50.929376   78377 cri.go:89] found id: ""
	I0422 18:28:50.929409   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.929420   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:50.929428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:50.929493   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:50.963919   78377 cri.go:89] found id: ""
	I0422 18:28:50.963949   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.963957   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:50.963963   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:50.964022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:50.998583   78377 cri.go:89] found id: ""
	I0422 18:28:50.998621   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.998632   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:50.998640   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:50.998702   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:51.036477   78377 cri.go:89] found id: ""
	I0422 18:28:51.036504   78377 logs.go:276] 0 containers: []
	W0422 18:28:51.036511   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:51.036519   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:51.036531   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:51.092688   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:51.092735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.107749   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:51.107778   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:51.185620   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:51.185643   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:51.185665   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:51.268824   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:51.268856   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:53.814341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:53.829048   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:53.829123   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:53.873451   78377 cri.go:89] found id: ""
	I0422 18:28:53.873483   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.873493   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:53.873500   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:53.873564   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:53.915262   78377 cri.go:89] found id: ""
	I0422 18:28:53.915295   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.915306   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:53.915315   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:53.915404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:53.958526   78377 cri.go:89] found id: ""
	I0422 18:28:53.958556   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.958567   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:53.958575   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:53.958645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:53.997452   78377 cri.go:89] found id: ""
	I0422 18:28:53.997484   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.997496   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:53.997503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:53.997563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:54.035937   78377 cri.go:89] found id: ""
	I0422 18:28:54.035961   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.035970   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:54.035975   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:54.036022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:54.078858   78377 cri.go:89] found id: ""
	I0422 18:28:54.078885   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.078893   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:54.078898   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:54.078959   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:54.117431   78377 cri.go:89] found id: ""
	I0422 18:28:54.117454   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.117462   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:54.117470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:54.117516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:54.156022   78377 cri.go:89] found id: ""
	I0422 18:28:54.156050   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.156059   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:54.156068   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:54.156085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:54.234075   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:54.234095   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:54.234108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:54.314392   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:54.314430   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:54.359388   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:54.359420   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:54.416412   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:54.416449   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.704563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.206032   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.362075   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.861096   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.883667   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:57.386081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.934970   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:56.948741   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:56.948820   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:56.984911   78377 cri.go:89] found id: ""
	I0422 18:28:56.984943   78377 logs.go:276] 0 containers: []
	W0422 18:28:56.984954   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:56.984961   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:56.985026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:57.022939   78377 cri.go:89] found id: ""
	I0422 18:28:57.022967   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.022980   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:57.022986   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:57.023033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:57.064582   78377 cri.go:89] found id: ""
	I0422 18:28:57.064606   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.064619   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:57.064626   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:57.064686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:57.105214   78377 cri.go:89] found id: ""
	I0422 18:28:57.105248   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.105259   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:57.105266   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:57.105317   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:57.142061   78377 cri.go:89] found id: ""
	I0422 18:28:57.142093   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.142104   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:57.142112   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:57.142176   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:57.187628   78377 cri.go:89] found id: ""
	I0422 18:28:57.187658   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.187668   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:57.187675   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:57.187744   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:57.223614   78377 cri.go:89] found id: ""
	I0422 18:28:57.223637   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.223645   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:57.223650   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:57.223705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:57.261853   78377 cri.go:89] found id: ""
	I0422 18:28:57.261876   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.261883   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:57.261890   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:57.261902   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:57.317980   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:57.318017   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:57.334434   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:57.334469   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:57.409639   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:57.409664   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:57.409680   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:57.494197   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:57.494240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:56.709043   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.203924   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:58.861932   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.360398   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.360867   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.882692   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.883267   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.383872   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:00.069390   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:00.083231   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:00.083307   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:00.123418   78377 cri.go:89] found id: ""
	I0422 18:29:00.123448   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.123459   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:00.123470   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:00.123533   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:00.159047   78377 cri.go:89] found id: ""
	I0422 18:29:00.159070   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.159081   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:00.159087   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:00.159191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:00.197934   78377 cri.go:89] found id: ""
	I0422 18:29:00.197960   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.198074   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:00.198086   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:00.198164   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:00.235243   78377 cri.go:89] found id: ""
	I0422 18:29:00.235273   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.235281   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:00.235287   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:00.235342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:00.271866   78377 cri.go:89] found id: ""
	I0422 18:29:00.271901   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.271912   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:00.271921   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:00.271981   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:00.308481   78377 cri.go:89] found id: ""
	I0422 18:29:00.308518   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.308531   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:00.308539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:00.308590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:00.343970   78377 cri.go:89] found id: ""
	I0422 18:29:00.343998   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.344009   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:00.344016   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:00.344063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:00.381443   78377 cri.go:89] found id: ""
	I0422 18:29:00.381462   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.381468   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:00.381475   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:00.381486   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:00.436244   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:00.436278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:00.451487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:00.451512   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:00.522440   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:00.522467   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:00.522483   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:00.602301   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:00.602333   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:03.141925   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:03.155393   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:03.155470   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:03.192801   78377 cri.go:89] found id: ""
	I0422 18:29:03.192825   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.192832   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:03.192838   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:03.192896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:03.244352   78377 cri.go:89] found id: ""
	I0422 18:29:03.244384   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.244395   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:03.244403   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:03.244466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:03.303294   78377 cri.go:89] found id: ""
	I0422 18:29:03.303318   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.303326   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:03.303331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:03.303384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:03.354236   78377 cri.go:89] found id: ""
	I0422 18:29:03.354267   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.354275   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:03.354282   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:03.354343   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:03.394639   78377 cri.go:89] found id: ""
	I0422 18:29:03.394669   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.394679   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:03.394686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:03.394754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:03.431362   78377 cri.go:89] found id: ""
	I0422 18:29:03.431408   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.431419   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:03.431428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:03.431494   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:03.472150   78377 cri.go:89] found id: ""
	I0422 18:29:03.472178   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.472186   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:03.472191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:03.472253   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:03.508059   78377 cri.go:89] found id: ""
	I0422 18:29:03.508083   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.508091   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:03.508100   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:03.508112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:03.557491   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:03.557528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:03.573208   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:03.573245   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:03.643262   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:03.643284   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:03.643295   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:03.726353   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:03.726389   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:01.204827   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.204916   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.355065   77634 pod_ready.go:81] duration metric: took 4m0.0011361s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:04.355113   77634 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:04.355148   77634 pod_ready.go:38] duration metric: took 4m14.498231749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:04.355180   77634 kubeadm.go:591] duration metric: took 4m21.764385121s to restartPrimaryControlPlane
	W0422 18:29:04.355236   77634 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:04.355261   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
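
The pod_ready lines interleaved above come from the parallel clusters polling for their metrics-server pod to report Ready; once the 4m0s budget is exhausted (pod_ready.go:81), the restart of the primary control plane is abandoned and the cluster is reset with "kubeadm reset --force", as the two lines just above show. A minimal sketch of that bounded readiness poll, using kubectl's jsonpath output instead of client-go (kubectl on PATH, a valid kubeconfig, and the hard-coded pod name are assumptions taken from the log; the real wait lives in minikube's pod_ready.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ready asks kubectl for the pod's Ready condition and returns "True" or "False".
func ready(ns, pod string) (string, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	const ns, pod = "kube-system", "metrics-server-569cc877fc-d8s5p" // pod name taken from the log above
	deadline := time.Now().Add(4 * time.Minute)                      // same 4m0s budget as pod_ready.go
	for time.Now().Before(deadline) {
		status, err := ready(ns, pod)
		if err == nil && status == "True" {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q has status Ready: %q, retrying\n", pod, status)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting 4m0s for pod to be Ready; cluster would be reset")
}
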
	I0422 18:29:06.385395   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:08.883604   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:06.270762   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:06.284792   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:06.284866   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:06.324717   78377 cri.go:89] found id: ""
	I0422 18:29:06.324750   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.324762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:06.324770   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:06.324829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:06.368279   78377 cri.go:89] found id: ""
	I0422 18:29:06.368311   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.368320   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:06.368326   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:06.368390   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:06.413754   78377 cri.go:89] found id: ""
	I0422 18:29:06.413789   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.413800   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:06.413807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:06.413864   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:06.453290   78377 cri.go:89] found id: ""
	I0422 18:29:06.453324   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.453335   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:06.453343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:06.453402   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:06.494420   78377 cri.go:89] found id: ""
	I0422 18:29:06.494472   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.494485   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:06.494493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:06.494547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:06.533736   78377 cri.go:89] found id: ""
	I0422 18:29:06.533768   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.533776   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:06.533784   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:06.533855   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:06.575873   78377 cri.go:89] found id: ""
	I0422 18:29:06.575899   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.575910   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:06.575917   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:06.575973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:06.620505   78377 cri.go:89] found id: ""
	I0422 18:29:06.620532   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.620541   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:06.620555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:06.620569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:06.701583   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:06.701607   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:06.701621   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:06.789370   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:06.789408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.832879   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:06.832915   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:06.892055   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:06.892085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:09.409104   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:09.422213   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:09.422287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:09.463906   78377 cri.go:89] found id: ""
	I0422 18:29:09.463938   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.463949   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:09.463956   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:09.464016   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:09.504600   78377 cri.go:89] found id: ""
	I0422 18:29:09.504626   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.504634   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:09.504640   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:09.504701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:09.544271   78377 cri.go:89] found id: ""
	I0422 18:29:09.544297   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.544308   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:09.544315   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:09.544385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:09.584323   78377 cri.go:89] found id: ""
	I0422 18:29:09.584355   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.584367   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:09.584375   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:09.584443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:09.621595   78377 cri.go:89] found id: ""
	I0422 18:29:09.621622   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.621632   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:09.621638   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:09.621703   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:05.703491   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:07.704534   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.705814   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:11.383569   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:13.883521   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.654701   78377 cri.go:89] found id: ""
	I0422 18:29:09.654731   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.654741   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:09.654749   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:09.654809   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:09.691517   78377 cri.go:89] found id: ""
	I0422 18:29:09.691544   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.691555   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:09.691561   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:09.691611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:09.726139   78377 cri.go:89] found id: ""
	I0422 18:29:09.726164   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.726172   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:09.726179   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:09.726192   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:09.796871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:09.796899   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:09.796920   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:09.876465   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:09.876509   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:09.917893   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:09.917930   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:09.968232   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:09.968273   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:12.484341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:12.499173   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:12.499243   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:12.536536   78377 cri.go:89] found id: ""
	I0422 18:29:12.536566   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.536577   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:12.536583   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:12.536642   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:12.578616   78377 cri.go:89] found id: ""
	I0422 18:29:12.578645   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.578655   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:12.578663   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:12.578742   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:12.615437   78377 cri.go:89] found id: ""
	I0422 18:29:12.615464   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.615475   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:12.615483   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:12.615552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:12.652622   78377 cri.go:89] found id: ""
	I0422 18:29:12.652647   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.652655   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:12.652661   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:12.652717   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:12.687831   78377 cri.go:89] found id: ""
	I0422 18:29:12.687863   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.687886   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:12.687895   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:12.687968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:12.725695   78377 cri.go:89] found id: ""
	I0422 18:29:12.725727   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.725734   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:12.725740   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:12.725801   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:12.764633   78377 cri.go:89] found id: ""
	I0422 18:29:12.764660   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.764669   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:12.764676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:12.764754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:12.803161   78377 cri.go:89] found id: ""
	I0422 18:29:12.803188   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.803199   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:12.803209   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:12.803225   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:12.874276   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:12.874298   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:12.874311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:12.961086   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:12.961123   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:13.009108   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:13.009134   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:13.060695   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:13.060741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:11.706608   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:14.204779   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:16.384284   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.884060   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:15.578465   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:15.592781   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:15.592847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:15.630723   78377 cri.go:89] found id: ""
	I0422 18:29:15.630763   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.630775   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:15.630784   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:15.630848   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:15.672656   78377 cri.go:89] found id: ""
	I0422 18:29:15.672682   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.672689   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:15.672694   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:15.672743   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:15.718081   78377 cri.go:89] found id: ""
	I0422 18:29:15.718107   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.718115   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:15.718120   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:15.718168   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:15.757204   78377 cri.go:89] found id: ""
	I0422 18:29:15.757229   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.757237   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:15.757242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:15.757289   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:15.793481   78377 cri.go:89] found id: ""
	I0422 18:29:15.793507   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.793515   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:15.793520   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:15.793571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:15.831366   78377 cri.go:89] found id: ""
	I0422 18:29:15.831414   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.831435   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:15.831443   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:15.831510   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:15.868553   78377 cri.go:89] found id: ""
	I0422 18:29:15.868583   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.868593   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:15.868601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:15.868657   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:15.908487   78377 cri.go:89] found id: ""
	I0422 18:29:15.908517   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.908527   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:15.908538   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:15.908553   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.923479   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:15.923507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:15.995109   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:15.995156   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:15.995172   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:16.074773   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:16.074812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.122088   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:16.122114   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:18.674525   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:18.688006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:18.688077   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:18.726070   78377 cri.go:89] found id: ""
	I0422 18:29:18.726101   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.726114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:18.726122   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:18.726183   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:18.762885   78377 cri.go:89] found id: ""
	I0422 18:29:18.762916   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.762928   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:18.762936   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:18.762996   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:18.802266   78377 cri.go:89] found id: ""
	I0422 18:29:18.802289   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.802297   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:18.802302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:18.802349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:18.841407   78377 cri.go:89] found id: ""
	I0422 18:29:18.841445   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.841453   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:18.841459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:18.841515   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:18.877234   78377 cri.go:89] found id: ""
	I0422 18:29:18.877308   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.877330   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:18.877343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:18.877410   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:18.917025   78377 cri.go:89] found id: ""
	I0422 18:29:18.917056   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.917063   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:18.917068   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:18.917124   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:18.954201   78377 cri.go:89] found id: ""
	I0422 18:29:18.954228   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.954235   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:18.954241   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:18.954298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:18.992427   78377 cri.go:89] found id: ""
	I0422 18:29:18.992454   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.992463   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:18.992471   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:18.992482   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:19.041093   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:19.041125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:19.056711   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:19.056742   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:19.142569   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:19.142593   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:19.142604   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:19.217815   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:19.217855   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
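	(annotation) Once no control-plane containers are found, the "Gathering logs for ..." block above falls back to a fixed set of diagnostic commands (kubelet and CRI-O journals, dmesg, kubectl describe nodes, container status). A rough sketch of that collection loop, run locally instead of through ssh_runner, with the commands copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := []struct{ name, cmd string }{
			{"kubelet", "sudo journalctl -u kubelet -n 400"},
			{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
			{"CRI-O", "sudo journalctl -u crio -n 400"},
			{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		}
		for _, s := range sources {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			fmt.Printf("==> %s <==\n%s\n", s.name, out)
			if err != nil {
				// e.g. the "connection refused" seen above while the apiserver is down
				fmt.Printf("(%s failed: %v)\n", s.name, err)
			}
		}
	}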
	I0422 18:29:16.704652   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.704899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:21.391438   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:22.376750   77929 pod_ready.go:81] duration metric: took 4m0.000534542s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:22.376787   77929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:22.376811   77929 pod_ready.go:38] duration metric: took 4m11.560762914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:22.376844   77929 kubeadm.go:591] duration metric: took 4m19.827120959s to restartPrimaryControlPlane
	W0422 18:29:22.376929   77929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:22.376953   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:21.767953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:21.783373   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:21.783428   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:21.821614   78377 cri.go:89] found id: ""
	I0422 18:29:21.821644   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.821656   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:21.821664   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:21.821725   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:21.857122   78377 cri.go:89] found id: ""
	I0422 18:29:21.857151   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.857161   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:21.857168   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:21.857228   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:21.894803   78377 cri.go:89] found id: ""
	I0422 18:29:21.894825   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.894833   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:21.894841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:21.894896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:21.933665   78377 cri.go:89] found id: ""
	I0422 18:29:21.933701   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.933712   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:21.933723   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:21.933787   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:21.973071   78377 cri.go:89] found id: ""
	I0422 18:29:21.973113   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.973125   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:21.973143   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:21.973210   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:22.011359   78377 cri.go:89] found id: ""
	I0422 18:29:22.011391   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.011403   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:22.011410   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:22.011488   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:22.049681   78377 cri.go:89] found id: ""
	I0422 18:29:22.049709   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.049716   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:22.049721   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:22.049782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:22.088347   78377 cri.go:89] found id: ""
	I0422 18:29:22.088375   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.088386   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:22.088396   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:22.088410   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:22.142224   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:22.142267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:22.156643   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:22.156668   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:22.231849   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:22.231879   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:22.231892   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:22.313426   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:22.313470   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:21.203699   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:23.204704   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:25.206832   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:24.863473   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:24.882024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:24.882098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:24.924050   78377 cri.go:89] found id: ""
	I0422 18:29:24.924081   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.924092   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:24.924100   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:24.924163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:24.976296   78377 cri.go:89] found id: ""
	I0422 18:29:24.976326   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.976335   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:24.976345   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:24.976412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:25.029222   78377 cri.go:89] found id: ""
	I0422 18:29:25.029251   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.029272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:25.029280   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:25.029349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:25.077673   78377 cri.go:89] found id: ""
	I0422 18:29:25.077706   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.077717   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:25.077724   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:25.077784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:25.125043   78377 cri.go:89] found id: ""
	I0422 18:29:25.125078   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.125090   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:25.125098   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:25.125179   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:25.175533   78377 cri.go:89] found id: ""
	I0422 18:29:25.175566   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.175577   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:25.175585   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:25.175647   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:25.221986   78377 cri.go:89] found id: ""
	I0422 18:29:25.222016   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.222024   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:25.222030   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:25.222091   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:25.264497   78377 cri.go:89] found id: ""
	I0422 18:29:25.264536   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.264547   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:25.264558   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:25.264574   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:25.374379   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:25.374438   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:25.418690   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:25.418726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:25.472266   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:25.472300   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:25.488487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:25.488582   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:25.586957   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:28.087958   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:28.102224   78377 kubeadm.go:591] duration metric: took 4m2.253635072s to restartPrimaryControlPlane
	W0422 18:29:28.102310   78377 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:28.102339   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:27.706178   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:30.203899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:31.612457   78377 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.510090318s)
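	(annotation) This is minikube's give-up path: restartPrimaryControlPlane timed out, so the cluster is wiped with "kubeadm reset --force" (the command that just completed above) and then re-initialised below from the kubeadm.yaml it copies into /var/tmp/minikube. A minimal sketch of that reset-then-reinit sequence, assuming local execution rather than the ssh_runner used in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a shell command with the kubeadm binary directory prepended
	// to PATH, roughly what the ssh_runner lines above do on the guest VM.
	func run(binDir, cmd string) error {
		c := exec.Command("/bin/bash", "-c",
			fmt.Sprintf("sudo env PATH=%s:$PATH %s", binDir, cmd))
		c.Stdout, c.Stderr = os.Stdout, os.Stderr
		return c.Run()
	}

	func main() {
		binDir := "/var/lib/minikube/binaries/v1.20.0" // version taken from the log above
		// Tear down whatever is left of the old control plane ...
		if err := run(binDir, "kubeadm reset --cri-socket /var/run/crio/crio.sock --force"); err != nil {
			fmt.Println("reset failed:", err)
			return
		}
		// ... then bring it back up from the generated config. The
		// --ignore-preflight-errors list is abbreviated here; the full list is in the log.
		if err := run(binDir, "kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem"); err != nil {
			fmt.Println("init failed:", err)
		}
	}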
	I0422 18:29:31.612545   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:31.628958   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:31.640917   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:31.652696   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:31.652721   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:31.652770   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:31.664114   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:31.664168   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:31.674923   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:31.684843   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:31.684896   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:31.695240   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.706058   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:31.706111   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.717091   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:31.727265   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:31.727336   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
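	(annotation) The grep/rm sequence above is the stale-config check: for each kubeconfig under /etc/kubernetes, minikube greps for the expected control-plane endpoint and removes the file when the endpoint is not found. Here the files do not exist at all, so every grep exits with status 2 and every rm is a no-op. The same loop repeats later for the other profiles (with port 8444 for default-k8s-diff-port). A sketch of the loop, assuming local execution:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the endpoint is missing or the file does not
			// exist; either way the file is treated as stale and removed.
			if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q not found in %s - removing\n", endpoint, f)
				_ = exec.Command("sudo", "rm", "-f", f).Run()
			}
		}
	}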
	I0422 18:29:31.737801   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:31.812467   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:29:31.812529   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:31.966913   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:31.967059   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:31.967197   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:32.154019   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:32.156034   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:32.156123   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:32.156226   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:32.156318   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:32.156373   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:32.156431   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:32.156486   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:32.156545   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:32.156925   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:32.157393   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:32.157903   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:32.157945   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:32.158030   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:32.431206   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:32.644858   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:32.778777   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:32.983609   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:32.999320   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:32.999451   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:32.999532   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:33.136671   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:33.138828   78377 out.go:204]   - Booting up control plane ...
	I0422 18:29:33.138935   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:33.143714   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:33.145398   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:33.157636   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:33.157801   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:29:32.204107   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:34.707228   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:36.541281   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.185998541s)
	I0422 18:29:36.541367   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:36.558729   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:36.569635   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:36.579901   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:36.579919   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:36.579959   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:36.589540   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:36.589602   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:36.600704   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:36.610945   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:36.611012   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:36.621316   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.631251   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:36.631305   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.641661   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:36.650970   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:36.651049   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:36.661012   77634 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:36.717676   77634 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:36.717771   77634 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:36.861264   77634 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:36.861404   77634 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:36.861534   77634 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:37.083032   77634 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:37.084958   77634 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:37.085069   77634 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:37.085179   77634 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:37.085296   77634 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:37.085387   77634 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:37.085505   77634 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:37.085579   77634 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:37.085665   77634 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:37.085748   77634 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:37.085869   77634 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:37.085985   77634 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:37.086037   77634 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:37.086114   77634 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:37.337747   77634 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:37.538036   77634 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:37.630303   77634 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:37.755713   77634 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:38.081451   77634 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:38.082265   77634 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:38.084958   77634 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:38.086755   77634 out.go:204]   - Booting up control plane ...
	I0422 18:29:38.086893   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:38.087023   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:38.089714   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:38.108313   77634 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:38.108786   77634 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:38.108849   77634 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:38.241537   77634 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:38.241681   77634 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:37.203550   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:39.205619   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:38.743798   77634 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.847818ms
	I0422 18:29:38.743910   77634 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:44.245440   77634 kubeadm.go:309] [api-check] The API server is healthy after 5.501913498s
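	(annotation) The kubelet-check and api-check phases above are plain health polls: kubeadm keeps hitting a health endpoint until it answers or the 4m0s budget runs out (the kubelet answered after ~0.5s and the API server after ~5.5s here). The exact endpoint kubeadm queries is not shown in the log; the sketch below polls /healthz on localhost:8443 as an assumption, skipping TLS verification for brevity:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the "up to 4m0s" budget in the log
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://localhost:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("API server is healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for a healthy API server")
	}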
	I0422 18:29:44.265283   77634 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:29:44.280940   77634 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:29:44.318688   77634 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:29:44.318990   77634 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-782377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:29:44.332201   77634 kubeadm.go:309] [bootstrap-token] Using token: o52gh5.f6sjmkidroy1sl61
	I0422 18:29:44.333546   77634 out.go:204]   - Configuring RBAC rules ...
	I0422 18:29:44.333670   77634 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:29:44.342847   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:29:44.350983   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:29:44.354214   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:29:44.361351   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:29:44.365170   77634 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:29:44.654414   77634 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:29:45.170247   77634 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:29:45.654714   77634 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:29:45.654744   77634 kubeadm.go:309] 
	I0422 18:29:45.654847   77634 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:29:45.654871   77634 kubeadm.go:309] 
	I0422 18:29:45.654984   77634 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:29:45.654996   77634 kubeadm.go:309] 
	I0422 18:29:45.655028   77634 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:29:45.655108   77634 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:29:45.655201   77634 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:29:45.655211   77634 kubeadm.go:309] 
	I0422 18:29:45.655308   77634 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:29:45.655317   77634 kubeadm.go:309] 
	I0422 18:29:45.655395   77634 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:29:45.655414   77634 kubeadm.go:309] 
	I0422 18:29:45.655486   77634 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:29:45.655597   77634 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:29:45.655700   77634 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:29:45.655714   77634 kubeadm.go:309] 
	I0422 18:29:45.655824   77634 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:29:45.655951   77634 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:29:45.655963   77634 kubeadm.go:309] 
	I0422 18:29:45.656067   77634 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656226   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:29:45.656258   77634 kubeadm.go:309] 	--control-plane 
	I0422 18:29:45.656265   77634 kubeadm.go:309] 
	I0422 18:29:45.656383   77634 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:29:45.656394   77634 kubeadm.go:309] 
	I0422 18:29:45.656513   77634 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656602   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:29:45.657124   77634 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
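	(annotation) The --discovery-token-ca-cert-hash printed in the join command above is kubeadm's pin on the cluster CA: a SHA-256 over the DER-encoded Subject Public Key Info of the CA certificate's public key. A small sketch that recomputes it from a ca.crt, assuming the standard kubeadm path (minikube also keeps its copy under the /var/lib/minikube/certs directory shown earlier in the log):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			fmt.Println("read ca.crt:", err)
			return
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse certificate:", err)
			return
		}
		// Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			fmt.Println("marshal public key:", err)
			return
		}
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}

	Joining nodes compare this value against the CA they receive from the cluster-info ConfigMap, which is why the same hash appears in both the control-plane and worker join commands above.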
	I0422 18:29:45.657152   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:29:45.657168   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:29:45.658873   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:29:41.705450   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:44.205661   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:45.660184   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:29:45.671834   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
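	(annotation) "Configuring bridge CNI" here amounts to creating /etc/cni/net.d and copying a 496-byte conflist into it (the scp line above). The actual file minikube ships is not reproduced in the log, so the sketch below writes an illustrative minimal bridge + host-local conflist of the same general shape; the subnet and plugin options are assumptions, not minikube's real contents:

	package main

	import (
		"fmt"
		"os"
	)

	// Illustrative conflist only; values below are assumed for the example.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16",
	        "routes": [{"dst": "0.0.0.0/0"}]
	      }
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		path := "/etc/cni/net.d/1-k8s.conflist"
		if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed (root required):", err)
			return
		}
		fmt.Println("wrote", path)
	}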
	I0422 18:29:45.693947   77634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:29:45.694034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:45.694054   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-782377 minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=embed-certs-782377 minikube.k8s.io/primary=true
	I0422 18:29:45.901437   77634 ops.go:34] apiserver oom_adj: -16
	I0422 18:29:45.901443   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.402050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.902222   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.402527   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.901535   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.206598   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.703899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.401738   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:48.902497   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.402046   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.901756   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.402023   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.901600   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.401905   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.901739   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.401859   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.902155   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.661872   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.28489375s)
	I0422 18:29:54.661952   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:54.679790   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:54.689947   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:54.700173   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:54.700191   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:54.700230   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:29:54.711462   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:54.711519   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:54.721157   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:29:54.730698   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:54.730769   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:54.740596   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.750450   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:54.750521   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.760582   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:29:54.770551   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:54.770608   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:54.781181   77929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:54.834872   77929 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:54.834950   77929 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:54.982435   77929 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:54.982574   77929 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:54.982675   77929 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:55.208724   77929 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:50.704498   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:53.203270   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.206485   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.210946   77929 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:55.211072   77929 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:55.211180   77929 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:55.211326   77929 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:55.211425   77929 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:55.211546   77929 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:55.211655   77929 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:55.211746   77929 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:55.211831   77929 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:55.211932   77929 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:55.212028   77929 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:55.212076   77929 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:55.212150   77929 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:55.456090   77929 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:55.747103   77929 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:55.940962   77929 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:56.076850   77929 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:56.253326   77929 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:56.253921   77929 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:56.259311   77929 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:53.402196   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:53.902328   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.402353   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.901736   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.401514   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.902415   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.402371   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.902117   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.401817   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.902050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.402034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.574005   77634 kubeadm.go:1107] duration metric: took 12.880033802s to wait for elevateKubeSystemPrivileges
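	(annotation) The burst of "kubectl get sa default" runs above, spaced roughly every half second, is minikube waiting for the "default" ServiceAccount to exist in the fresh cluster before it applies the minikube-rbac ClusterRoleBinding; that wait is what the elevateKubeSystemPrivileges duration (12.88s here) measures. A sketch of the retry loop, using the kubectl path and kubeconfig from the log; the overall 5-minute budget is an assumption, since the log only shows the elapsed time:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.30.0/kubectl"
		kubeconfig := "/var/lib/minikube/kubeconfig"
		start := time.Now()
		deadline := start.Add(5 * time.Minute) // assumed budget
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig="+kubeconfig).Run()
			if err == nil {
				fmt.Printf("default service account ready after %s\n", time.Since(start))
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the retries above
		}
		fmt.Println("timed out waiting for the default service account")
	}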
	W0422 18:29:58.574051   77634 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:29:58.574061   77634 kubeadm.go:393] duration metric: took 5m16.036878933s to StartCluster
	I0422 18:29:58.574083   77634 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.574173   77634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:29:58.576621   77634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.576908   77634 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:29:58.578444   77634 out.go:177] * Verifying Kubernetes components...
	I0422 18:29:58.576967   77634 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:29:58.577120   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:29:58.579836   77634 addons.go:69] Setting default-storageclass=true in profile "embed-certs-782377"
	I0422 18:29:58.579846   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:29:58.579850   77634 addons.go:69] Setting metrics-server=true in profile "embed-certs-782377"
	I0422 18:29:58.579873   77634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-782377"
	I0422 18:29:58.579896   77634 addons.go:234] Setting addon metrics-server=true in "embed-certs-782377"
	W0422 18:29:58.579910   77634 addons.go:243] addon metrics-server should already be in state true
	I0422 18:29:58.579952   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.579841   77634 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-782377"
	I0422 18:29:58.580057   77634 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-782377"
	W0422 18:29:58.580070   77634 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:29:58.580099   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.580279   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580284   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580301   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580308   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580460   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580488   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.603276   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0422 18:29:58.603459   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0422 18:29:58.603483   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 18:29:58.607248   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607265   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607392   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607836   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.607853   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.607983   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.608001   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.608344   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608373   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608505   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.608932   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.608963   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612034   77634 addons.go:234] Setting addon default-storageclass=true in "embed-certs-782377"
	W0422 18:29:58.612056   77634 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:29:58.612084   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.612467   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.612485   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612786   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.612802   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.613185   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.613700   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.613728   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.630170   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0422 18:29:58.630586   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.631061   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.631081   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.631523   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.631693   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.631847   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0422 18:29:58.632457   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.632941   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.632966   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.633179   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0422 18:29:58.633322   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.633567   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.633688   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.635830   77634 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:29:58.633856   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.634354   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.636961   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.637004   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:29:58.637027   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:29:58.637045   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.637006   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.637294   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.637508   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.639287   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.640999   77634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:29:58.640236   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:56.261447   77929 out.go:204]   - Booting up control plane ...
	I0422 18:29:56.261539   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:56.261635   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:56.261736   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:56.285519   77929 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:56.285675   77929 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:56.285752   77929 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:56.437635   77929 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:56.437767   77929 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:56.944001   77929 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 506.500244ms
	I0422 18:29:56.944104   77929 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:58.640741   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.642428   77634 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.641034   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.642448   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:29:58.642456   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.642470   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.642590   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.642733   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.642860   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.645684   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646424   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.646469   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646728   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.646929   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.647079   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.647331   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.657385   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 18:29:58.658062   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.658658   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.658676   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.659065   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.659314   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.661001   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.661274   77634 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:58.661292   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:29:58.661309   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.664551   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.665029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665185   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.665397   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.665560   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.665692   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.840086   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:29:58.872963   77634 node_ready.go:35] waiting up to 6m0s for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882942   77634 node_ready.go:49] node "embed-certs-782377" has status "Ready":"True"
	I0422 18:29:58.882978   77634 node_ready.go:38] duration metric: took 9.978929ms for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882990   77634 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:58.892484   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:29:58.964679   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.987690   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:59.001748   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:29:59.001776   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:29:59.095009   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:29:59.095039   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:29:59.242427   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.242451   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:29:59.321464   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.989825   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025095721s)
	I0422 18:29:59.989883   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.989895   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.989828   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.002098611s)
	I0422 18:29:59.989974   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990005   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990193   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990231   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990239   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990247   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990254   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990306   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990341   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990355   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990369   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990380   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990504   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990523   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990572   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990588   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.025628   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.025655   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.025970   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.025991   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.434245   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.434287   77634 pod_ready.go:81] duration metric: took 1.54176792s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.434301   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454521   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.454545   77634 pod_ready.go:81] duration metric: took 20.235494ms for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454557   77634 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.473166   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.151631277s)
	I0422 18:30:00.473225   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473266   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473625   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.473660   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.473683   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.473706   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473719   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473998   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.474079   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.474098   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.474114   77634 addons.go:470] Verifying addon metrics-server=true in "embed-certs-782377"
	I0422 18:30:00.476224   77634 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:29:57.706757   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.206098   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.477945   77634 addons.go:505] duration metric: took 1.900979481s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:30:00.493925   77634 pod_ready.go:92] pod "etcd-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.493956   77634 pod_ready.go:81] duration metric: took 39.391277ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.493971   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502733   77634 pod_ready.go:92] pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.502762   77634 pod_ready.go:81] duration metric: took 8.782315ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502776   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517227   77634 pod_ready.go:92] pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.517249   77634 pod_ready.go:81] duration metric: took 14.465418ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517260   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881221   77634 pod_ready.go:92] pod "kube-proxy-6qsdm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.881245   77634 pod_ready.go:81] duration metric: took 363.979231ms for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881254   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277017   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:01.277103   77634 pod_ready.go:81] duration metric: took 395.840808ms for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277125   77634 pod_ready.go:38] duration metric: took 2.394112246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:01.277153   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:01.277240   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:01.295278   77634 api_server.go:72] duration metric: took 2.718332063s to wait for apiserver process to appear ...
	I0422 18:30:01.295316   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:01.295345   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:30:01.299754   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:30:01.300888   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:01.300912   77634 api_server.go:131] duration metric: took 5.588825ms to wait for apiserver health ...
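	The healthz probe logged above can also be reproduced by hand; a minimal sketch, assuming the apiserver permits anonymous access to /healthz (the endpoint address is taken from the log):

	    curl -k https://192.168.50.114:8443/healthz
	    # a healthy control plane answers with the literal body: ok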
	I0422 18:30:01.300920   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:01.480184   77634 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:01.480216   77634 system_pods.go:61] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.480220   77634 system_pods.go:61] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.480224   77634 system_pods.go:61] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.480227   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.480231   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.480234   77634 system_pods.go:61] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.480237   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.480243   77634 system_pods.go:61] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.480246   77634 system_pods.go:61] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.480253   77634 system_pods.go:74] duration metric: took 179.327678ms to wait for pod list to return data ...
	I0422 18:30:01.480260   77634 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:01.676749   77634 default_sa.go:45] found service account: "default"
	I0422 18:30:01.676792   77634 default_sa.go:55] duration metric: took 196.525393ms for default service account to be created ...
	I0422 18:30:01.676805   77634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:01.881811   77634 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:01.881846   77634 system_pods.go:89] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.881852   77634 system_pods.go:89] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.881856   77634 system_pods.go:89] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.881861   77634 system_pods.go:89] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.881866   77634 system_pods.go:89] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.881871   77634 system_pods.go:89] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.881875   77634 system_pods.go:89] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.881884   77634 system_pods.go:89] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.881891   77634 system_pods.go:89] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.881902   77634 system_pods.go:126] duration metric: took 205.08856ms to wait for k8s-apps to be running ...
	I0422 18:30:01.881915   77634 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:01.881971   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:01.898653   77634 system_svc.go:56] duration metric: took 16.727076ms WaitForService to wait for kubelet
	I0422 18:30:01.898688   77634 kubeadm.go:576] duration metric: took 3.321747224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:01.898716   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:02.079527   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:02.079552   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:02.079567   77634 node_conditions.go:105] duration metric: took 180.844523ms to run NodePressure ...
	I0422 18:30:02.079581   77634 start.go:240] waiting for startup goroutines ...
	I0422 18:30:02.079590   77634 start.go:245] waiting for cluster config update ...
	I0422 18:30:02.079603   77634 start.go:254] writing updated cluster config ...
	I0422 18:30:02.079881   77634 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:02.131965   77634 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:02.133816   77634 out.go:177] * Done! kubectl is now configured to use "embed-certs-782377" cluster and "default" namespace by default
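	Once a profile reports "Done!", the written kubeconfig context can be exercised the same way the test helpers do; a minimal sketch (context name taken from the log above):

	    kubectl --context embed-certs-782377 get nodes
	    kubectl --context embed-certs-782377 -n kube-system get pods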
	I0422 18:30:02.446649   77929 kubeadm.go:309] [api-check] The API server is healthy after 5.502662802s
	I0422 18:30:02.466311   77929 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:02.504029   77929 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:02.586946   77929 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:02.587250   77929 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-856422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:02.600362   77929 kubeadm.go:309] [bootstrap-token] Using token: f03yx2.2vmzf4rav70vm6gm
	I0422 18:30:02.601830   77929 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:02.601961   77929 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:02.608688   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:02.621264   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:02.625695   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:02.630424   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:02.639203   77929 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:02.856167   77929 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:03.309505   77929 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:03.855419   77929 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:03.855443   77929 kubeadm.go:309] 
	I0422 18:30:03.855541   77929 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:03.855567   77929 kubeadm.go:309] 
	I0422 18:30:03.855643   77929 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:03.855653   77929 kubeadm.go:309] 
	I0422 18:30:03.855688   77929 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:03.855756   77929 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:03.855841   77929 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:03.855854   77929 kubeadm.go:309] 
	I0422 18:30:03.855909   77929 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:03.855915   77929 kubeadm.go:309] 
	I0422 18:30:03.855954   77929 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:03.855960   77929 kubeadm.go:309] 
	I0422 18:30:03.856051   77929 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:03.856171   77929 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:03.856248   77929 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:03.856259   77929 kubeadm.go:309] 
	I0422 18:30:03.856390   77929 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:03.856484   77929 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:03.856496   77929 kubeadm.go:309] 
	I0422 18:30:03.856636   77929 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.856729   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:03.856749   77929 kubeadm.go:309] 	--control-plane 
	I0422 18:30:03.856755   77929 kubeadm.go:309] 
	I0422 18:30:03.856823   77929 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:03.856829   77929 kubeadm.go:309] 
	I0422 18:30:03.856911   77929 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.857040   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:03.857540   77929 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
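	For reference, the token and CA-certificate hash embedded in the join commands above can be re-derived on the control-plane node with standard tooling; a minimal sketch, assuming shell access to that node (this is not something the test itself runs):

	    sudo kubeadm token list
	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'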
	I0422 18:30:03.857569   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:30:03.857583   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:03.859350   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:03.860736   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:03.873189   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:30:03.897193   77929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:03.897260   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:03.897317   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-856422 minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=default-k8s-diff-port-856422 minikube.k8s.io/primary=true
	I0422 18:30:04.114339   77929 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:04.114499   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:02.703452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.705502   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.615355   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.115530   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.614776   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.114991   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.614772   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.114921   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.614799   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.115218   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.614688   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:09.114578   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.203762   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.704636   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.615201   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.115526   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.614511   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.115041   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.615220   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.115463   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.614937   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.115470   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.615417   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:14.114916   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.158118   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:30:13.158841   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:13.159056   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
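	When the kubelet health check fails like this, the state of the service can be inspected directly on the node; a minimal sketch of the usual checks, assuming SSH access to the VM (for example via minikube ssh -p <profile>):

	    sudo systemctl status kubelet
	    sudo journalctl -u kubelet --no-pager | tail -n 50
	    curl -sSL http://localhost:10248/healthz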
	I0422 18:30:11.706452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.203931   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.614582   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.115466   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.615542   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.115554   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.614586   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.114645   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.614945   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.769793   77929 kubeadm.go:1107] duration metric: took 13.872592974s to wait for elevateKubeSystemPrivileges
	W0422 18:30:17.769857   77929 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:30:17.769869   77929 kubeadm.go:393] duration metric: took 5m15.279261637s to StartCluster
	I0422 18:30:17.769889   77929 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.769958   77929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:30:17.771921   77929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.772222   77929 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:30:17.774219   77929 out.go:177] * Verifying Kubernetes components...
	I0422 18:30:17.772365   77929 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:30:17.772496   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:30:17.776231   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:30:17.776249   77929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776267   77929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776294   77929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776307   77929 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:30:17.776321   77929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-856422"
	I0422 18:30:17.776343   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776284   77929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776412   77929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776430   77929 addons.go:243] addon metrics-server should already be in state true
	I0422 18:30:17.776469   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776775   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776809   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776778   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776846   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776777   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776926   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.796665   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0422 18:30:17.796701   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0422 18:30:17.796976   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0422 18:30:17.797083   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797472   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797609   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797795   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.797824   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798111   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798141   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798158   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798499   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798627   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798648   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798728   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.798776   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799001   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.799077   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.799107   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799274   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.803095   77929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.803141   77929 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:30:17.803175   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.803544   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.803580   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.820753   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0422 18:30:17.821272   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.821822   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.821839   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.822247   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.822315   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0422 18:30:17.822640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.823287   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0422 18:30:17.823373   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.823976   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.824141   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824152   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824479   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824498   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824561   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.824727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.825176   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.825646   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.825675   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.826014   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.828122   77929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:30:17.826808   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.829694   77929 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:17.829711   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:30:17.829729   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.831322   77929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:30:17.834942   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:30:17.834959   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:30:17.834979   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.833531   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.832894   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835054   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.835078   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.835468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.835674   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.837838   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838180   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.838204   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838459   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.838656   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.838827   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.838983   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.844804   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0422 18:30:17.845252   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.845762   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.845780   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.846071   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.846240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.847881   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.848127   77929 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:17.848142   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:30:17.848159   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.850959   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851369   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.851389   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.851786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.851918   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.852081   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.997608   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:30:18.066476   77929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.139937   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:18.141619   77929 node_ready.go:49] node "default-k8s-diff-port-856422" has status "Ready":"True"
	I0422 18:30:18.141645   77929 node_ready.go:38] duration metric: took 75.13675ms for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.141658   77929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:18.168289   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:18.217351   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:30:18.217374   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:30:18.280089   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:18.283704   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:30:18.283734   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:30:18.314907   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.314936   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:30:18.379950   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.595931   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.595969   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596350   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596374   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.596389   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596660   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596699   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596722   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610244   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.610277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.610614   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.610635   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610659   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.159553   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:18.159883   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:19.513892   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233747961s)
	I0422 18:30:19.513948   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.513961   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514326   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.514460   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.514491   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.514506   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514414   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517601   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.517617   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.805551   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425552646s)
	I0422 18:30:19.805610   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.805621   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.805986   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.806040   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.806064   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.806083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.807818   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.807865   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.807874   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.807889   77929 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-856422"
	I0422 18:30:19.809871   77929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0422 18:30:15.697614   77400 pod_ready.go:81] duration metric: took 4m0.000479463s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	E0422 18:30:15.697661   77400 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:30:15.697678   77400 pod_ready.go:38] duration metric: took 4m9.017394523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:15.697704   77400 kubeadm.go:591] duration metric: took 4m15.772560858s to restartPrimaryControlPlane
	W0422 18:30:15.697751   77400 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:30:15.697777   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:30:19.811644   77929 addons.go:505] duration metric: took 2.039289124s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0422 18:30:20.174912   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:20.675213   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.675247   77929 pod_ready.go:81] duration metric: took 2.506921343s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.675261   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681665   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.681690   77929 pod_ready.go:81] duration metric: took 6.421217ms for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681700   77929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687893   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.687926   77929 pod_ready.go:81] duration metric: took 6.218166ms for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687941   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696603   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.696634   77929 pod_ready.go:81] duration metric: took 8.684682ms for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696649   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702776   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.702800   77929 pod_ready.go:81] duration metric: took 6.141484ms for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702813   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073451   77929 pod_ready.go:92] pod "kube-proxy-4m8cm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.073485   77929 pod_ready.go:81] duration metric: took 370.663669ms for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073500   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474144   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.474175   77929 pod_ready.go:81] duration metric: took 400.665802ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474190   77929 pod_ready.go:38] duration metric: took 3.332515716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:21.474207   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:21.474273   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:21.491320   77929 api_server.go:72] duration metric: took 3.719060391s to wait for apiserver process to appear ...
	I0422 18:30:21.491352   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:21.491378   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:30:21.496589   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:30:21.497405   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:21.497426   77929 api_server.go:131] duration metric: took 6.067469ms to wait for apiserver health ...
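As an aside for readers tracing the wait logic above: the healthz step the log records (api_server.go checking https://192.168.61.206:8444/healthz and accepting a 200 "ok" body) is a plain HTTPS GET against the apiserver. Below is a minimal standalone sketch of that kind of probe, not minikube's actual implementation; the URL is taken from the log line above, and skipping TLS verification is an illustration-only shortcut (the real check trusts the profile's generated cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint copied from the log above; in practice it comes from the profile's kubeconfig.
	url := "https://192.168.61.206:8444/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is only for this sketch; the real probe verifies the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}

A 200 response with the body "ok" is what the log treats as a healthy control plane before it moves on to listing kube-system pods.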
	I0422 18:30:21.497433   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:21.675885   77929 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:21.675912   77929 system_pods.go:61] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:21.675916   77929 system_pods.go:61] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:21.675924   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:21.675928   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:21.675932   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:21.675935   77929 system_pods.go:61] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:21.675939   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:21.675945   77929 system_pods.go:61] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:21.675949   77929 system_pods.go:61] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:21.675959   77929 system_pods.go:74] duration metric: took 178.519985ms to wait for pod list to return data ...
	I0422 18:30:21.675965   77929 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:21.872358   77929 default_sa.go:45] found service account: "default"
	I0422 18:30:21.872382   77929 default_sa.go:55] duration metric: took 196.412252ms for default service account to be created ...
	I0422 18:30:21.872391   77929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:22.075660   77929 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:22.075689   77929 system_pods.go:89] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:22.075694   77929 system_pods.go:89] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:22.075698   77929 system_pods.go:89] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:22.075702   77929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:22.075706   77929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:22.075710   77929 system_pods.go:89] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:22.075714   77929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:22.075722   77929 system_pods.go:89] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:22.075726   77929 system_pods.go:89] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:22.075735   77929 system_pods.go:126] duration metric: took 203.339608ms to wait for k8s-apps to be running ...
	I0422 18:30:22.075742   77929 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:22.075785   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:22.091186   77929 system_svc.go:56] duration metric: took 15.433207ms WaitForService to wait for kubelet
	I0422 18:30:22.091219   77929 kubeadm.go:576] duration metric: took 4.318966383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:22.091237   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:22.272944   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:22.272971   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:22.272980   77929 node_conditions.go:105] duration metric: took 181.734735ms to run NodePressure ...
	I0422 18:30:22.272991   77929 start.go:240] waiting for startup goroutines ...
	I0422 18:30:22.273000   77929 start.go:245] waiting for cluster config update ...
	I0422 18:30:22.273010   77929 start.go:254] writing updated cluster config ...
	I0422 18:30:22.273248   77929 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:22.323725   77929 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:22.325876   77929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-856422" cluster and "default" namespace by default
	I0422 18:30:28.159925   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:28.160147   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.161034   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:48.161430   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.109960   77400 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.41215685s)
	I0422 18:30:48.110037   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:48.127246   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:30:48.138280   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:30:48.148521   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:30:48.148545   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:30:48.148588   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:30:48.160411   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:30:48.160483   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:30:48.170748   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:30:48.180399   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:30:48.180451   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:30:48.192521   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.202200   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:30:48.202274   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.212241   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:30:48.221754   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:30:48.221821   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:30:48.231555   77400 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:30:48.456873   77400 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:57.943980   77400 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:30:57.944080   77400 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:30:57.944182   77400 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:30:57.944305   77400 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:30:57.944411   77400 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:30:57.944499   77400 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:30:57.946110   77400 out.go:204]   - Generating certificates and keys ...
	I0422 18:30:57.946192   77400 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:30:57.946262   77400 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:30:57.946385   77400 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:30:57.946464   77400 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:30:57.946559   77400 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:30:57.946683   77400 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:30:57.946772   77400 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:30:57.946835   77400 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:30:57.946902   77400 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:30:57.946963   77400 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:30:57.947000   77400 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:30:57.947054   77400 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:30:57.947116   77400 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:30:57.947201   77400 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:30:57.947283   77400 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:30:57.947383   77400 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:30:57.947458   77400 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:30:57.947589   77400 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:30:57.947662   77400 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:30:57.949092   77400 out.go:204]   - Booting up control plane ...
	I0422 18:30:57.949194   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:30:57.949279   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:30:57.949336   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:30:57.949419   77400 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:30:57.949505   77400 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:30:57.949544   77400 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:30:57.949664   77400 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:30:57.949739   77400 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:30:57.949794   77400 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.588061ms
	I0422 18:30:57.949862   77400 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:30:57.949957   77400 kubeadm.go:309] [api-check] The API server is healthy after 5.510546703s
	I0422 18:30:57.950048   77400 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:57.950152   77400 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:57.950204   77400 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:57.950352   77400 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-407991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:57.950453   77400 kubeadm.go:309] [bootstrap-token] Using token: cwotot.4qmmrydp0nd6w5tq
	I0422 18:30:57.951938   77400 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:57.952040   77400 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:57.952134   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:57.952285   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:57.952410   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:57.952535   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:57.952666   77400 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:57.952799   77400 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:57.952867   77400 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:57.952936   77400 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:57.952952   77400 kubeadm.go:309] 
	I0422 18:30:57.953013   77400 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:57.953019   77400 kubeadm.go:309] 
	I0422 18:30:57.953084   77400 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:57.953090   77400 kubeadm.go:309] 
	I0422 18:30:57.953110   77400 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:57.953199   77400 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:57.953281   77400 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:57.953289   77400 kubeadm.go:309] 
	I0422 18:30:57.953374   77400 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:57.953381   77400 kubeadm.go:309] 
	I0422 18:30:57.953453   77400 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:57.953461   77400 kubeadm.go:309] 
	I0422 18:30:57.953538   77400 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:57.953636   77400 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:57.953719   77400 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:57.953726   77400 kubeadm.go:309] 
	I0422 18:30:57.953813   77400 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:57.953919   77400 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:57.953930   77400 kubeadm.go:309] 
	I0422 18:30:57.954047   77400 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954187   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:57.954222   77400 kubeadm.go:309] 	--control-plane 
	I0422 18:30:57.954232   77400 kubeadm.go:309] 
	I0422 18:30:57.954364   77400 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:57.954374   77400 kubeadm.go:309] 
	I0422 18:30:57.954440   77400 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954553   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:57.954574   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:30:57.954583   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:57.956278   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:57.957592   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:57.970080   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
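For context on the bridge CNI step just logged (the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist): the sketch below shows roughly what writing a bridge+portmap conflist looks like. It is not minikube's code and the JSON contents, including the pod subnet, are assumptions for illustration only; the target path is the one from the log line above.

package main

import "os"

// Illustrative bridge CNI chain; the real file minikube ships may differ in fields and size.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Same effect, in spirit, as the "scp memory --> /etc/cni/net.d/1-k8s.conflist" line above
	// (requires root on the node, as the minikube step does via ssh_runner).
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}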
	I0422 18:30:57.991711   77400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:57.991779   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:57.991780   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-407991 minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=no-preload-407991 minikube.k8s.io/primary=true
	I0422 18:30:58.232025   77400 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:58.232162   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:58.732395   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.232855   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.732187   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.232654   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.732995   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.232856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.732735   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.232474   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.732930   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.232411   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.732457   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.232888   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.732856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.232873   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.733177   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.232682   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.733241   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.232711   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.732922   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.232815   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.732377   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.232576   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.732243   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.232350   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.732764   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.232338   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.357414   77400 kubeadm.go:1107] duration metric: took 13.365692776s to wait for elevateKubeSystemPrivileges
	W0422 18:31:11.357460   77400 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:31:11.357472   77400 kubeadm.go:393] duration metric: took 5m11.48385131s to StartCluster
	I0422 18:31:11.357493   77400 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.357565   77400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:31:11.359176   77400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.359391   77400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:31:11.360948   77400 out.go:177] * Verifying Kubernetes components...
	I0422 18:31:11.359461   77400 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:31:11.359641   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:31:11.362433   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:31:11.362446   77400 addons.go:69] Setting storage-provisioner=true in profile "no-preload-407991"
	I0422 18:31:11.362464   77400 addons.go:69] Setting default-storageclass=true in profile "no-preload-407991"
	I0422 18:31:11.362486   77400 addons.go:69] Setting metrics-server=true in profile "no-preload-407991"
	I0422 18:31:11.362495   77400 addons.go:234] Setting addon storage-provisioner=true in "no-preload-407991"
	I0422 18:31:11.362500   77400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-407991"
	I0422 18:31:11.362515   77400 addons.go:234] Setting addon metrics-server=true in "no-preload-407991"
	W0422 18:31:11.362527   77400 addons.go:243] addon metrics-server should already be in state true
	W0422 18:31:11.362506   77400 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:31:11.362557   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362567   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362929   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362932   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362963   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362971   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362974   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.363144   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.379089   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0422 18:31:11.379582   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.380121   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.380145   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.380496   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.381098   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.381132   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.383229   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 18:31:11.383513   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0422 18:31:11.383642   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.383977   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.384136   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384148   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384552   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.384754   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384770   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384801   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.385103   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.386102   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.386130   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.388554   77400 addons.go:234] Setting addon default-storageclass=true in "no-preload-407991"
	W0422 18:31:11.388569   77400 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:31:11.388589   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.388921   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.388938   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.401669   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0422 18:31:11.402268   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.402852   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.402869   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.403427   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.403610   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.404849   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0422 18:31:11.405356   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.405588   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.406112   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.406129   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.407696   77400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:31:11.406649   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.409174   77400 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.409195   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:31:11.409214   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.409261   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.411378   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.412836   77400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:31:11.411939   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0422 18:31:11.414011   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:31:11.414027   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:31:11.413155   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.414045   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.414069   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.413487   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.414097   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.413841   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.414686   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.414781   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.414794   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.414871   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.415256   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.415607   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.416288   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.416343   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.417257   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417623   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.417644   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417898   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.418074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.418325   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.418468   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.432218   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0422 18:31:11.432682   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.433096   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.433108   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.433685   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.433887   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.435675   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.435931   77400 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.435952   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:31:11.435969   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.438700   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439107   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.439144   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439237   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.439482   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.439662   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.439833   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.610190   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:31:11.654061   77400 node_ready.go:35] waiting up to 6m0s for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663869   77400 node_ready.go:49] node "no-preload-407991" has status "Ready":"True"
	I0422 18:31:11.663904   77400 node_ready.go:38] duration metric: took 9.806821ms for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663917   77400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:11.673895   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:11.752785   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.770023   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:31:11.770054   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:31:11.799895   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.872083   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:31:11.872113   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:31:11.984597   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:11.984626   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:31:12.059137   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:13.130584   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330646778s)
	I0422 18:31:13.130694   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130718   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.130716   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37789401s)
	I0422 18:31:13.130833   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130847   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131067   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131135   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131159   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131172   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131289   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131304   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131312   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131319   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131327   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.131559   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131574   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131601   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131621   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131621   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.173181   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.173205   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.173478   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.173501   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.279764   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.220585481s)
	I0422 18:31:13.279813   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.279828   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280221   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280241   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280261   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280276   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.280290   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280532   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280570   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280577   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280586   77400 addons.go:470] Verifying addon metrics-server=true in "no-preload-407991"
	I0422 18:31:13.282757   77400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:31:13.284029   77400 addons.go:505] duration metric: took 1.924572004s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:31:13.681968   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.682004   77400 pod_ready.go:81] duration metric: took 2.008061657s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.682017   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687240   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.687268   77400 pod_ready.go:81] duration metric: took 5.242949ms for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687281   77400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693047   77400 pod_ready.go:92] pod "etcd-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.693074   77400 pod_ready.go:81] duration metric: took 5.784769ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693086   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705008   77400 pod_ready.go:92] pod "kube-apiserver-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.705028   77400 pod_ready.go:81] duration metric: took 11.934672ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705037   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721814   77400 pod_ready.go:92] pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.721840   77400 pod_ready.go:81] duration metric: took 16.796546ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721855   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079660   77400 pod_ready.go:92] pod "kube-proxy-47g8k" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.079681   77400 pod_ready.go:81] duration metric: took 357.819791ms for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079692   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480000   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.480026   77400 pod_ready.go:81] duration metric: took 400.326493ms for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480037   77400 pod_ready.go:38] duration metric: took 2.816106046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:14.480054   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:31:14.480123   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:31:14.508798   77400 api_server.go:72] duration metric: took 3.149365253s to wait for apiserver process to appear ...
	I0422 18:31:14.508822   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:31:14.508842   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:31:14.523293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:31:14.524410   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:31:14.524439   77400 api_server.go:131] duration metric: took 15.608906ms to wait for apiserver health ...
	I0422 18:31:14.524448   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:31:14.682120   77400 system_pods.go:59] 9 kube-system pods found
	I0422 18:31:14.682152   77400 system_pods.go:61] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:14.682157   77400 system_pods.go:61] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:14.682161   77400 system_pods.go:61] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:14.682164   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:14.682169   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:14.682173   77400 system_pods.go:61] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:14.682178   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:14.682188   77400 system_pods.go:61] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:14.682194   77400 system_pods.go:61] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:14.682205   77400 system_pods.go:74] duration metric: took 157.750249ms to wait for pod list to return data ...
	I0422 18:31:14.682222   77400 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:31:14.878556   77400 default_sa.go:45] found service account: "default"
	I0422 18:31:14.878581   77400 default_sa.go:55] duration metric: took 196.353021ms for default service account to be created ...
	I0422 18:31:14.878590   77400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:31:15.081385   77400 system_pods.go:86] 9 kube-system pods found
	I0422 18:31:15.081415   77400 system_pods.go:89] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:15.081425   77400 system_pods.go:89] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:15.081430   77400 system_pods.go:89] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:15.081434   77400 system_pods.go:89] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:15.081438   77400 system_pods.go:89] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:15.081448   77400 system_pods.go:89] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:15.081452   77400 system_pods.go:89] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:15.081458   77400 system_pods.go:89] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:15.081464   77400 system_pods.go:89] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:15.081476   77400 system_pods.go:126] duration metric: took 202.881032ms to wait for k8s-apps to be running ...
	I0422 18:31:15.081484   77400 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:31:15.081530   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:15.098245   77400 system_svc.go:56] duration metric: took 16.748933ms WaitForService to wait for kubelet
	I0422 18:31:15.098278   77400 kubeadm.go:576] duration metric: took 3.738847086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:31:15.098302   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:31:15.278812   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:31:15.278839   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:31:15.278848   77400 node_conditions.go:105] duration metric: took 180.541553ms to run NodePressure ...
	I0422 18:31:15.278859   77400 start.go:240] waiting for startup goroutines ...
	I0422 18:31:15.278866   77400 start.go:245] waiting for cluster config update ...
	I0422 18:31:15.278875   77400 start.go:254] writing updated cluster config ...
	I0422 18:31:15.279242   77400 ssh_runner.go:195] Run: rm -f paused
	I0422 18:31:15.330788   77400 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:31:15.333274   77400 out.go:177] * Done! kubectl is now configured to use "no-preload-407991" cluster and "default" namespace by default
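	(Editor's note, not part of the captured output: at this point the no-preload-407991 profile has come up and kubectl has been pointed at it. As a hedged sketch of how one might spot-check that cluster from the host, using only the context name reported above:

		kubectl --context no-preload-407991 get nodes -o wide
		kubectl --context no-preload-407991 get pods -n kube-system

	The metrics-server-569cc877fc-vrzfj pod is still Pending in the listing above, so a kube-system query like this would be expected to show it as not yet Ready.)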
	I0422 18:31:28.163100   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:31:28.163394   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163417   78377 kubeadm.go:309] 
	I0422 18:31:28.163487   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:31:28.163724   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:31:28.163734   78377 kubeadm.go:309] 
	I0422 18:31:28.163791   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:31:28.163857   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:31:28.164010   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:31:28.164024   78377 kubeadm.go:309] 
	I0422 18:31:28.164159   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:31:28.164207   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:31:28.164251   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:31:28.164265   78377 kubeadm.go:309] 
	I0422 18:31:28.164413   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:31:28.164579   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:31:28.164607   78377 kubeadm.go:309] 
	I0422 18:31:28.164767   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:31:28.164919   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:31:28.165050   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:31:28.165153   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:31:28.165169   78377 kubeadm.go:309] 
	I0422 18:31:28.166948   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:31:28.167081   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:31:28.167206   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:31:28.167328   78377 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0422 18:31:28.167404   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:31:28.857637   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:28.875137   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:31:28.887680   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:31:28.887713   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:31:28.887768   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:31:28.900305   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:31:28.900364   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:31:28.912825   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:31:28.927080   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:31:28.927184   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:31:28.939052   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.949650   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:31:28.949726   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.960782   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:31:28.972073   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:31:28.972131   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:31:28.983161   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:31:29.220135   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:33:25.762018   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:33:25.762162   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:33:25.763935   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:33:25.763996   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:33:25.764109   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:33:25.764234   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:33:25.764384   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:33:25.764478   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:33:25.766215   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:33:25.766332   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:33:25.766425   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:33:25.766525   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:33:25.766612   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:33:25.766680   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:33:25.766725   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:33:25.766778   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:33:25.766829   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:33:25.766907   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:33:25.766999   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:33:25.767062   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:33:25.767150   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:33:25.767210   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:33:25.767277   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:33:25.767378   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:33:25.767465   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:33:25.767602   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:33:25.767714   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:33:25.767848   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:33:25.767944   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:33:25.769378   78377 out.go:204]   - Booting up control plane ...
	I0422 18:33:25.769497   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:33:25.769600   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:33:25.769691   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:33:25.769819   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:33:25.769987   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:33:25.770059   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:33:25.770164   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770451   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770538   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770748   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770827   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771002   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771066   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771264   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771397   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771583   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771594   78377 kubeadm.go:309] 
	I0422 18:33:25.771655   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:33:25.771711   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:33:25.771726   78377 kubeadm.go:309] 
	I0422 18:33:25.771779   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:33:25.771836   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:33:25.771973   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:33:25.771981   78377 kubeadm.go:309] 
	I0422 18:33:25.772091   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:33:25.772132   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:33:25.772175   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:33:25.772182   78377 kubeadm.go:309] 
	I0422 18:33:25.772286   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:33:25.772374   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:33:25.772381   78377 kubeadm.go:309] 
	I0422 18:33:25.772491   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:33:25.772570   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:33:25.772641   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:33:25.772702   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:33:25.772741   78377 kubeadm.go:309] 
	I0422 18:33:25.772767   78377 kubeadm.go:393] duration metric: took 7m59.977108208s to StartCluster
	I0422 18:33:25.772800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:33:25.772854   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:33:25.824904   78377 cri.go:89] found id: ""
	I0422 18:33:25.824928   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.824946   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:33:25.824957   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:33:25.825011   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:33:25.864537   78377 cri.go:89] found id: ""
	I0422 18:33:25.864563   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.864570   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:33:25.864575   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:33:25.864630   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:33:25.906760   78377 cri.go:89] found id: ""
	I0422 18:33:25.906784   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.906793   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:33:25.906800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:33:25.906868   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:33:25.945325   78377 cri.go:89] found id: ""
	I0422 18:33:25.945347   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.945354   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:33:25.945360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:33:25.945407   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:33:25.984005   78377 cri.go:89] found id: ""
	I0422 18:33:25.984035   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.984052   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:33:25.984059   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:33:25.984121   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:33:26.023499   78377 cri.go:89] found id: ""
	I0422 18:33:26.023525   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.023535   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:33:26.023549   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:33:26.023611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:33:26.064439   78377 cri.go:89] found id: ""
	I0422 18:33:26.064468   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.064479   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:33:26.064487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:33:26.064552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:33:26.104231   78377 cri.go:89] found id: ""
	I0422 18:33:26.104254   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.104262   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:33:26.104270   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:33:26.104282   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:33:26.213826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:33:26.213871   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:33:26.278837   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:33:26.278866   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:33:26.337634   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:33:26.337677   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:33:26.351578   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:33:26.351605   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:33:26.445108   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0422 18:33:26.445139   78377 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:33:26.445177   78377 out.go:239] * 
	W0422 18:33:26.445248   78377 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.445279   78377 out.go:239] * 
	W0422 18:33:26.446406   78377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:33:26.450209   78377 out.go:177] 
	W0422 18:33:26.451494   78377 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.451552   78377 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:33:26.451576   78377 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:33:26.453333   78377 out.go:177] 
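	(Editor's note, not part of the captured output: the run above exits with K8S_KUBELET_NOT_RUNNING and minikube's own suggestion points at the kubelet cgroup driver. A minimal sketch of retrying the old-k8s-version profile with that hint applied, assuming the kvm2 driver and cri-o runtime this job uses and the v1.20.0 version from the log:

		minikube delete -p old-k8s-version-367072
		minikube start -p old-k8s-version-367072 \
		  --driver=kvm2 \
		  --container-runtime=crio \
		  --kubernetes-version=v1.20.0 \
		  --extra-config=kubelet.cgroup-driver=systemd

	Whether this clears the failure depends on what 'journalctl -xeu kubelet' shows on the node; the related upstream issue is the one linked above, kubernetes/minikube#4172.)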
	
	
	==> CRI-O <==
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.632050469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811351632017349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0488c15-4e80-4472-b1be-1a37025a5153 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.633531382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67d2c8f8-fd7e-4f09-b1a2-d8fd32711578 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.633596243Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67d2c8f8-fd7e-4f09-b1a2-d8fd32711578 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.633648165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=67d2c8f8-fd7e-4f09-b1a2-d8fd32711578 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.670617981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=86870988-9eab-46fd-803b-6efbe7e95019 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.670723327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=86870988-9eab-46fd-803b-6efbe7e95019 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.672043456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c832bc65-ca46-409b-82a1-0b6d2c7f6c47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.672509112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811351672485525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c832bc65-ca46-409b-82a1-0b6d2c7f6c47 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.673108060Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30e3231f-b239-4ddf-aae6-040482c2aaee name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.673157657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30e3231f-b239-4ddf-aae6-040482c2aaee name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.673199848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=30e3231f-b239-4ddf-aae6-040482c2aaee name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.707507028Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31c1c974-d70e-4bd6-9b00-2afbac6e8483 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.707588625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31c1c974-d70e-4bd6-9b00-2afbac6e8483 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.708909831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08a44709-5c5e-4840-afbc-fb2e0d6d3302 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.709286908Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811351709266809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08a44709-5c5e-4840-afbc-fb2e0d6d3302 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.709845679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6daf0f4-e64f-4ca6-a474-92461b7350cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.709915899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6daf0f4-e64f-4ca6-a474-92461b7350cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.709946115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f6daf0f4-e64f-4ca6-a474-92461b7350cd name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.748326630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff2d6e79-e4b5-48ae-b06b-e582e4162077 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.748472406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff2d6e79-e4b5-48ae-b06b-e582e4162077 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.750559254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72b445cc-336e-4eb6-a2c3-41a237ebcb0c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.750957252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811351750930436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72b445cc-336e-4eb6-a2c3-41a237ebcb0c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.751747155Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42fa8425-47d9-46ea-b336-350eb4406272 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.751810164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42fa8425-47d9-46ea-b336-350eb4406272 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:42:31 old-k8s-version-367072 crio[648]: time="2024-04-22 18:42:31.751840668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=42fa8425-47d9-46ea-b336-350eb4406272 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr22 18:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054750] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043660] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr22 18:25] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922715] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.744071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.637131] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.065794] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061682] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.221839] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.164619] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.287340] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +7.158439] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.071484] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.066379] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[ +11.632913] kauditd_printk_skb: 46 callbacks suppressed
	[Apr22 18:29] systemd-fstab-generator[4961]: Ignoring "noauto" option for root device
	[Apr22 18:31] systemd-fstab-generator[5238]: Ignoring "noauto" option for root device
	[  +0.069844] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:42:31 up 17 min,  0 users,  load average: 0.01, 0.09, 0.07
	Linux old-k8s-version-367072 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc00091f170)
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: goroutine 159 [select]:
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cf3ef0, 0x4f0ac20, 0xc0001198b0, 0x1, 0xc0001000c0)
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254540, 0xc0001000c0)
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00096ce50, 0xc000bece40)
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6423]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 22 18:42:27 old-k8s-version-367072 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 22 18:42:27 old-k8s-version-367072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 22 18:42:27 old-k8s-version-367072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 22 18:42:27 old-k8s-version-367072 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 22 18:42:27 old-k8s-version-367072 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6432]: I0422 18:42:27.786326    6432 server.go:416] Version: v1.20.0
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6432]: I0422 18:42:27.786688    6432 server.go:837] Client rotation is on, will bootstrap in background
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6432]: I0422 18:42:27.788710    6432 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6432]: W0422 18:42:27.789823    6432 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 22 18:42:27 old-k8s-version-367072 kubelet[6432]: I0422 18:42:27.789958    6432 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (252.646353ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-367072" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.40s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782377 -n embed-certs-782377
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-22 18:45:28.261669149 +0000 UTC m=+6512.991282769
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-782377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-782377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.717µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-782377 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-782377 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-782377 logs -n 25: (1.723742701s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC | 22 Apr 24 18:45 UTC |
	| start   | -p newest-cni-505212 --memory=2200 --alsologtostderr   | newest-cni-505212            | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:45:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:45:10.112916   84518 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:45:10.113190   84518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:45:10.113203   84518 out.go:304] Setting ErrFile to fd 2...
	I0422 18:45:10.113209   84518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:45:10.113476   84518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:45:10.114094   84518 out.go:298] Setting JSON to false
	I0422 18:45:10.115020   84518 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8855,"bootTime":1713802655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:45:10.115081   84518 start.go:139] virtualization: kvm guest
	I0422 18:45:10.117465   84518 out.go:177] * [newest-cni-505212] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:45:10.118909   84518 notify.go:220] Checking for updates...
	I0422 18:45:10.118917   84518 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:45:10.120251   84518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:45:10.121708   84518 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:45:10.123073   84518 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:45:10.124539   84518 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:45:10.125871   84518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:45:10.127954   84518 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:45:10.128103   84518 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:45:10.128239   84518 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:45:10.128384   84518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:45:10.167266   84518 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:45:10.168686   84518 start.go:297] selected driver: kvm2
	I0422 18:45:10.168708   84518 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:45:10.168735   84518 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:45:10.169596   84518 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:45:10.169660   84518 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:45:10.185116   84518 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:45:10.185194   84518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0422 18:45:10.185238   84518 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0422 18:45:10.185616   84518 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0422 18:45:10.185689   84518 cni.go:84] Creating CNI manager for ""
	I0422 18:45:10.185706   84518 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:45:10.185719   84518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 18:45:10.185811   84518 start.go:340] cluster config:
	{Name:newest-cni-505212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-505212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:45:10.185966   84518 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:45:10.187822   84518 out.go:177] * Starting "newest-cni-505212" primary control-plane node in "newest-cni-505212" cluster
	I0422 18:45:10.188993   84518 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:45:10.189038   84518 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:45:10.189054   84518 cache.go:56] Caching tarball of preloaded images
	I0422 18:45:10.189148   84518 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:45:10.189161   84518 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:45:10.189260   84518 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/newest-cni-505212/config.json ...
	I0422 18:45:10.189282   84518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/newest-cni-505212/config.json: {Name:mkd111e05cfb582e9c3b193258ee98577aa32be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:45:10.189444   84518 start.go:360] acquireMachinesLock for newest-cni-505212: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:45:10.189485   84518 start.go:364] duration metric: took 23.945µs to acquireMachinesLock for "newest-cni-505212"
	I0422 18:45:10.189507   84518 start.go:93] Provisioning new machine with config: &{Name:newest-cni-505212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-505212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:45:10.189621   84518 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 18:45:10.191272   84518 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 18:45:10.191442   84518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:45:10.191495   84518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:45:10.206981   84518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0422 18:45:10.207438   84518 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:45:10.208114   84518 main.go:141] libmachine: Using API Version  1
	I0422 18:45:10.208141   84518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:45:10.208494   84518 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:45:10.208685   84518 main.go:141] libmachine: (newest-cni-505212) Calling .GetMachineName
	I0422 18:45:10.208826   84518 main.go:141] libmachine: (newest-cni-505212) Calling .DriverName
	I0422 18:45:10.209044   84518 start.go:159] libmachine.API.Create for "newest-cni-505212" (driver="kvm2")
	I0422 18:45:10.209125   84518 client.go:168] LocalClient.Create starting
	I0422 18:45:10.209158   84518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 18:45:10.209190   84518 main.go:141] libmachine: Decoding PEM data...
	I0422 18:45:10.209208   84518 main.go:141] libmachine: Parsing certificate...
	I0422 18:45:10.209265   84518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 18:45:10.209289   84518 main.go:141] libmachine: Decoding PEM data...
	I0422 18:45:10.209300   84518 main.go:141] libmachine: Parsing certificate...
	I0422 18:45:10.209313   84518 main.go:141] libmachine: Running pre-create checks...
	I0422 18:45:10.209321   84518 main.go:141] libmachine: (newest-cni-505212) Calling .PreCreateCheck
	I0422 18:45:10.209759   84518 main.go:141] libmachine: (newest-cni-505212) Calling .GetConfigRaw
	I0422 18:45:10.210188   84518 main.go:141] libmachine: Creating machine...
	I0422 18:45:10.210206   84518 main.go:141] libmachine: (newest-cni-505212) Calling .Create
	I0422 18:45:10.210337   84518 main.go:141] libmachine: (newest-cni-505212) Creating KVM machine...
	I0422 18:45:10.211724   84518 main.go:141] libmachine: (newest-cni-505212) DBG | found existing default KVM network
	I0422 18:45:10.212958   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.212765   84540 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:3f:3b} reservation:<nil>}
	I0422 18:45:10.213804   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.213715   84540 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:03:c9} reservation:<nil>}
	I0422 18:45:10.214661   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.214591   84540 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9c:60:80} reservation:<nil>}
	I0422 18:45:10.215723   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.215636   84540 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289890}
	I0422 18:45:10.215794   84518 main.go:141] libmachine: (newest-cni-505212) DBG | created network xml: 
	I0422 18:45:10.215818   84518 main.go:141] libmachine: (newest-cni-505212) DBG | <network>
	I0422 18:45:10.215849   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   <name>mk-newest-cni-505212</name>
	I0422 18:45:10.215910   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   <dns enable='no'/>
	I0422 18:45:10.215926   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   
	I0422 18:45:10.215940   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0422 18:45:10.215951   84518 main.go:141] libmachine: (newest-cni-505212) DBG |     <dhcp>
	I0422 18:45:10.215983   84518 main.go:141] libmachine: (newest-cni-505212) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0422 18:45:10.215999   84518 main.go:141] libmachine: (newest-cni-505212) DBG |     </dhcp>
	I0422 18:45:10.216007   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   </ip>
	I0422 18:45:10.216029   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   
	I0422 18:45:10.216046   84518 main.go:141] libmachine: (newest-cni-505212) DBG | </network>
	I0422 18:45:10.216054   84518 main.go:141] libmachine: (newest-cni-505212) DBG | 
	I0422 18:45:10.221511   84518 main.go:141] libmachine: (newest-cni-505212) DBG | trying to create private KVM network mk-newest-cni-505212 192.168.72.0/24...
	I0422 18:45:10.295286   84518 main.go:141] libmachine: (newest-cni-505212) DBG | private KVM network mk-newest-cni-505212 192.168.72.0/24 created
	I0422 18:45:10.295322   84518 main.go:141] libmachine: (newest-cni-505212) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212 ...
	I0422 18:45:10.295347   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.295265   84540 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:45:10.295365   84518 main.go:141] libmachine: (newest-cni-505212) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 18:45:10.295471   84518 main.go:141] libmachine: (newest-cni-505212) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 18:45:10.524206   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.524099   84540 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/id_rsa...
	I0422 18:45:10.777495   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.777323   84540 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/newest-cni-505212.rawdisk...
	I0422 18:45:10.777552   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Writing magic tar header
	I0422 18:45:10.777615   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Writing SSH key tar header
	I0422 18:45:10.777661   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.777483   84540 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212 ...
	I0422 18:45:10.777680   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212 (perms=drwx------)
	I0422 18:45:10.777704   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 18:45:10.777715   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 18:45:10.777732   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 18:45:10.777745   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212
	I0422 18:45:10.777753   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 18:45:10.777767   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 18:45:10.777779   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 18:45:10.777787   84518 main.go:141] libmachine: (newest-cni-505212) Creating domain...
	I0422 18:45:10.777797   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:45:10.777805   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 18:45:10.777811   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 18:45:10.777819   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins
	I0422 18:45:10.777825   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home
	I0422 18:45:10.777833   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Skipping /home - not owner
	I0422 18:45:10.778938   84518 main.go:141] libmachine: (newest-cni-505212) define libvirt domain using xml: 
	I0422 18:45:10.778961   84518 main.go:141] libmachine: (newest-cni-505212) <domain type='kvm'>
	I0422 18:45:10.778968   84518 main.go:141] libmachine: (newest-cni-505212)   <name>newest-cni-505212</name>
	I0422 18:45:10.778973   84518 main.go:141] libmachine: (newest-cni-505212)   <memory unit='MiB'>2200</memory>
	I0422 18:45:10.778981   84518 main.go:141] libmachine: (newest-cni-505212)   <vcpu>2</vcpu>
	I0422 18:45:10.778991   84518 main.go:141] libmachine: (newest-cni-505212)   <features>
	I0422 18:45:10.779002   84518 main.go:141] libmachine: (newest-cni-505212)     <acpi/>
	I0422 18:45:10.779013   84518 main.go:141] libmachine: (newest-cni-505212)     <apic/>
	I0422 18:45:10.779021   84518 main.go:141] libmachine: (newest-cni-505212)     <pae/>
	I0422 18:45:10.779030   84518 main.go:141] libmachine: (newest-cni-505212)     
	I0422 18:45:10.779038   84518 main.go:141] libmachine: (newest-cni-505212)   </features>
	I0422 18:45:10.779043   84518 main.go:141] libmachine: (newest-cni-505212)   <cpu mode='host-passthrough'>
	I0422 18:45:10.779050   84518 main.go:141] libmachine: (newest-cni-505212)   
	I0422 18:45:10.779055   84518 main.go:141] libmachine: (newest-cni-505212)   </cpu>
	I0422 18:45:10.779062   84518 main.go:141] libmachine: (newest-cni-505212)   <os>
	I0422 18:45:10.779066   84518 main.go:141] libmachine: (newest-cni-505212)     <type>hvm</type>
	I0422 18:45:10.779074   84518 main.go:141] libmachine: (newest-cni-505212)     <boot dev='cdrom'/>
	I0422 18:45:10.779079   84518 main.go:141] libmachine: (newest-cni-505212)     <boot dev='hd'/>
	I0422 18:45:10.779087   84518 main.go:141] libmachine: (newest-cni-505212)     <bootmenu enable='no'/>
	I0422 18:45:10.779092   84518 main.go:141] libmachine: (newest-cni-505212)   </os>
	I0422 18:45:10.779099   84518 main.go:141] libmachine: (newest-cni-505212)   <devices>
	I0422 18:45:10.779110   84518 main.go:141] libmachine: (newest-cni-505212)     <disk type='file' device='cdrom'>
	I0422 18:45:10.779136   84518 main.go:141] libmachine: (newest-cni-505212)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/boot2docker.iso'/>
	I0422 18:45:10.779157   84518 main.go:141] libmachine: (newest-cni-505212)       <target dev='hdc' bus='scsi'/>
	I0422 18:45:10.779165   84518 main.go:141] libmachine: (newest-cni-505212)       <readonly/>
	I0422 18:45:10.779173   84518 main.go:141] libmachine: (newest-cni-505212)     </disk>
	I0422 18:45:10.779179   84518 main.go:141] libmachine: (newest-cni-505212)     <disk type='file' device='disk'>
	I0422 18:45:10.779187   84518 main.go:141] libmachine: (newest-cni-505212)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 18:45:10.779207   84518 main.go:141] libmachine: (newest-cni-505212)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/newest-cni-505212.rawdisk'/>
	I0422 18:45:10.779215   84518 main.go:141] libmachine: (newest-cni-505212)       <target dev='hda' bus='virtio'/>
	I0422 18:45:10.779221   84518 main.go:141] libmachine: (newest-cni-505212)     </disk>
	I0422 18:45:10.779229   84518 main.go:141] libmachine: (newest-cni-505212)     <interface type='network'>
	I0422 18:45:10.779255   84518 main.go:141] libmachine: (newest-cni-505212)       <source network='mk-newest-cni-505212'/>
	I0422 18:45:10.779285   84518 main.go:141] libmachine: (newest-cni-505212)       <model type='virtio'/>
	I0422 18:45:10.779294   84518 main.go:141] libmachine: (newest-cni-505212)     </interface>
	I0422 18:45:10.779300   84518 main.go:141] libmachine: (newest-cni-505212)     <interface type='network'>
	I0422 18:45:10.779316   84518 main.go:141] libmachine: (newest-cni-505212)       <source network='default'/>
	I0422 18:45:10.779327   84518 main.go:141] libmachine: (newest-cni-505212)       <model type='virtio'/>
	I0422 18:45:10.779335   84518 main.go:141] libmachine: (newest-cni-505212)     </interface>
	I0422 18:45:10.779343   84518 main.go:141] libmachine: (newest-cni-505212)     <serial type='pty'>
	I0422 18:45:10.779351   84518 main.go:141] libmachine: (newest-cni-505212)       <target port='0'/>
	I0422 18:45:10.779358   84518 main.go:141] libmachine: (newest-cni-505212)     </serial>
	I0422 18:45:10.779364   84518 main.go:141] libmachine: (newest-cni-505212)     <console type='pty'>
	I0422 18:45:10.779371   84518 main.go:141] libmachine: (newest-cni-505212)       <target type='serial' port='0'/>
	I0422 18:45:10.779380   84518 main.go:141] libmachine: (newest-cni-505212)     </console>
	I0422 18:45:10.779387   84518 main.go:141] libmachine: (newest-cni-505212)     <rng model='virtio'>
	I0422 18:45:10.779411   84518 main.go:141] libmachine: (newest-cni-505212)       <backend model='random'>/dev/random</backend>
	I0422 18:45:10.779429   84518 main.go:141] libmachine: (newest-cni-505212)     </rng>
	I0422 18:45:10.779465   84518 main.go:141] libmachine: (newest-cni-505212)     
	I0422 18:45:10.779482   84518 main.go:141] libmachine: (newest-cni-505212)     
	I0422 18:45:10.779491   84518 main.go:141] libmachine: (newest-cni-505212)   </devices>
	I0422 18:45:10.779500   84518 main.go:141] libmachine: (newest-cni-505212) </domain>
	I0422 18:45:10.779511   84518 main.go:141] libmachine: (newest-cni-505212) 
	I0422 18:45:10.784393   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:e3:23:d8 in network default
	I0422 18:45:10.784977   84518 main.go:141] libmachine: (newest-cni-505212) Ensuring networks are active...
	I0422 18:45:10.785004   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:10.785773   84518 main.go:141] libmachine: (newest-cni-505212) Ensuring network default is active
	I0422 18:45:10.786215   84518 main.go:141] libmachine: (newest-cni-505212) Ensuring network mk-newest-cni-505212 is active
	I0422 18:45:10.786738   84518 main.go:141] libmachine: (newest-cni-505212) Getting domain xml...
	I0422 18:45:10.787599   84518 main.go:141] libmachine: (newest-cni-505212) Creating domain...
	I0422 18:45:12.040257   84518 main.go:141] libmachine: (newest-cni-505212) Waiting to get IP...
	I0422 18:45:12.041090   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:12.041604   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:12.041639   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:12.041591   84540 retry.go:31] will retry after 302.443308ms: waiting for machine to come up
	I0422 18:45:12.346125   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:12.346770   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:12.346795   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:12.346738   84540 retry.go:31] will retry after 336.383544ms: waiting for machine to come up
	I0422 18:45:12.684177   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:12.684660   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:12.684683   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:12.684626   84540 retry.go:31] will retry after 406.194746ms: waiting for machine to come up
	I0422 18:45:13.092322   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:13.092809   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:13.092833   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:13.092782   84540 retry.go:31] will retry after 382.460714ms: waiting for machine to come up
	I0422 18:45:13.477433   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:13.477908   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:13.477933   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:13.477856   84540 retry.go:31] will retry after 604.904054ms: waiting for machine to come up
	I0422 18:45:14.084786   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:14.085339   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:14.085381   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:14.085307   84540 retry.go:31] will retry after 943.058132ms: waiting for machine to come up
	I0422 18:45:15.029471   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:15.029948   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:15.029977   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:15.029891   84540 retry.go:31] will retry after 1.092745482s: waiting for machine to come up
	I0422 18:45:16.124142   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:16.124614   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:16.124641   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:16.124566   84540 retry.go:31] will retry after 1.247361176s: waiting for machine to come up
	I0422 18:45:17.373250   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:17.373690   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:17.373720   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:17.373649   84540 retry.go:31] will retry after 1.782608696s: waiting for machine to come up
	I0422 18:45:19.157944   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:19.158371   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:19.158414   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:19.158314   84540 retry.go:31] will retry after 1.833762676s: waiting for machine to come up
	I0422 18:45:20.994006   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:20.994603   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:20.994630   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:20.994549   84540 retry.go:31] will retry after 2.649935927s: waiting for machine to come up
	I0422 18:45:23.647372   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:23.647834   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:23.647864   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:23.647793   84540 retry.go:31] will retry after 3.367112316s: waiting for machine to come up
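
The libmachine log above shows the KVM driver polling for the domain's IP address and retrying with progressively longer delays ("will retry after ..."). Below is a minimal Go sketch of that retry-with-backoff pattern; lookupIP and the delay values are hypothetical stand-ins for illustration, not minikube's actual retry.go.

	// Sketch of the retry-with-growing-delay pattern visible in the log above.
	// lookupIP is a placeholder for the "find current IP address of domain" step.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("no IP yet")

	// lookupIP is hypothetical; in minikube this would query libvirt for the
	// domain's DHCP lease.
	func lookupIP() (string, error) { return "", errNoIP }

	func waitForIP(maxAttempts int) (string, error) {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				return ip, nil
			}
			// Add jitter and grow the delay, mirroring the increasing
			// "will retry after ..." durations in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		return "", fmt.Errorf("machine did not get an IP after %d attempts", maxAttempts)
	}

	func main() {
		if _, err := waitForIP(5); err != nil {
			fmt.Println(err)
		}
	}
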
	
	
	==> CRI-O <==
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.409127698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811529409100553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f768d33-e440-4eec-a9fc-a90e6de967b8 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.409767149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e6b95c2-93bb-43b1-8610-103391f9b3a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.409841471Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e6b95c2-93bb-43b1-8610-103391f9b3a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.410624113Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e6b95c2-93bb-43b1-8610-103391f9b3a7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.455097862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a09993ab-df56-4a3f-af3c-87f049265856 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.455196779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a09993ab-df56-4a3f-af3c-87f049265856 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.456782223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=338f40ac-8db6-4516-830c-97ffb837c5cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.457275364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811529457252233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=338f40ac-8db6-4516-830c-97ffb837c5cb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.457838385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e66c2e88-7dd4-4779-a4e8-f847ad97695d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.457907914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e66c2e88-7dd4-4779-a4e8-f847ad97695d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.459902102Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e66c2e88-7dd4-4779-a4e8-f847ad97695d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.507250852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87e52eeb-040c-48b2-859f-d9acb1829f4c name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.507357315Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87e52eeb-040c-48b2-859f-d9acb1829f4c name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.508684136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b4a46c4-1134-43b4-b8b1-e07231de0279 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.509170339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811529509145643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b4a46c4-1134-43b4-b8b1-e07231de0279 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.509654783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37831e07-cacb-411e-bf9d-019bcded3391 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.509732949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37831e07-cacb-411e-bf9d-019bcded3391 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.510028894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37831e07-cacb-411e-bf9d-019bcded3391 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.545837491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6543324a-5e33-4612-ba83-f4228f112033 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.545907799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6543324a-5e33-4612-ba83-f4228f112033 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.547458104Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ac78ca2-b921-4bc9-9f5d-56ce86798688 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.547963615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811529547887752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ac78ca2-b921-4bc9-9f5d-56ce86798688 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.548802440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4488c24a-7af1-46c9-922b-3eb9d372f60f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.548852387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4488c24a-7af1-46c9-922b-3eb9d372f60f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:29 embed-certs-782377 crio[724]: time="2024-04-22 18:45:29.549120600Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756,PodSandboxId:2100946ee89aa413efac84ca99794bfcc88ac70f38b63bc6870be46cb696697f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810600605344833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f515603-72e0-4408-9180-1010cf97877d,},Annotations:map[string]string{io.kubernetes.container.hash: 3babdd2,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481,PodSandboxId:6ea02c2d9d9641f2c28cfd089ca047aaa8f507adb03cafa106b03cd32919e1a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599666411683,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-425zd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70c9e268-0ecd-4d68-aac9-b979888bfd95,},Annotations:map[string]string{io.kubernetes.container.hash: 3c0b1d7f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478,PodSandboxId:6182a53be2d9c3a94672db18b929cfd2b0a2c482a879e26b262e6d367d82c2e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810599644451941,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-44bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70
b8e7df-e60e-441c-8249-5eebb9a4409c,},Annotations:map[string]string{io.kubernetes.container.hash: e90eb6fc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732,PodSandboxId:1da8deaadb5a1dbdb2eb11bdbf4eb0d98babede2e54fd780705c46fd9db4a8ce,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810599508016950,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6qsdm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a79875f5-4fdf-4a0e-9bfc-985fda10a906,},Annotations:map[string]string{io.kubernetes.container.hash: 52de6cc2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9,PodSandboxId:0f8b5a120c61e765fb4eeadb5a4cd226a14715e6127604f5b61e07a60e7d6d96,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810579285618138,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f2fafddd9940494ad294a48e8603a8e3,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781,PodSandboxId:1e88cb01978f58a82142aa08799fa49875260a4b703dde9b3c4620fc0b44fe4c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810579259902306,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01f859357e4afdb12fb42a95a16952b1,},Annotations:map[string]string{io.kubernetes.container.hash: 7c0548ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70,PodSandboxId:0510ebb35da7ba545c06f6c9741dfd14eae68b3b8b6545bd49f39e29b1da13cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810579289373847,Labels:map[string]string{io.kubernetes.container.name:
kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bdccc9980979127d4755cbda0fbecd7,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57,PodSandboxId:c3dbb79a78f3eaa146acfa9c3b66a1fbbb2e31e7c5304b63073c775dca4fb70d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810579141646590,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-782377,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73eef8b6c0004e5c37db86236681b5e2,},Annotations:map[string]string{io.kubernetes.container.hash: a89301dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4488c24a-7af1-46c9-922b-3eb9d372f60f name=/runtime.v1.RuntimeService/ListContainers
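
The CRI-O debug log above is the server side of the CRI gRPC API: repeated Version, ImageFsInfo and ListContainers calls, the last with an empty filter ("No filters were applied, returning full container list"). Below is a minimal Go sketch that issues the same Version and ListContainers RPCs against the CRI-O socket; the k8s.io/cri-api client stubs and the /var/run/crio/crio.sock endpoint are assumptions about the environment, and the program would need to run on the node with access to the socket.

	// Sketch of querying the CRI RuntimeService the way the log above shows it
	// being served. Socket path and module versions are assumptions.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Same call as the "/runtime.v1.RuntimeService/Version" entries above.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s (API %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

		// Empty filter == "returning full container list" in the log.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range list.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
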
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c0185f4d38b02       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   2100946ee89aa       storage-provisioner
	a4d31d6c730b4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   6ea02c2d9d964       coredns-7db6d8ff4d-425zd
	b866a8972ff20       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   6182a53be2d9c       coredns-7db6d8ff4d-44bfz
	0db3e27ffbb20       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   15 minutes ago      Running             kube-proxy                0                   1da8deaadb5a1       kube-proxy-6qsdm
	c5908d6f96605       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   15 minutes ago      Running             kube-controller-manager   2                   0510ebb35da7b       kube-controller-manager-embed-certs-782377
	081ed6bcbd5ca       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   15 minutes ago      Running             kube-scheduler            2                   0f8b5a120c61e       kube-scheduler-embed-certs-782377
	b22085c535e9c       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   1e88cb01978f5       etcd-embed-certs-782377
	01c3e02d8cb9a       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   15 minutes ago      Running             kube-apiserver            2                   c3dbb79a78f3e       kube-apiserver-embed-certs-782377
	
	
	==> coredns [a4d31d6c730b4915524e1737615becbded8d7ef1470c49074c607cc675cef481] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b866a8972ff2092bf73688b2f353c3d2a98321872973620ecf67e2a76fc1b478] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-782377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-782377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=embed-certs-782377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:29:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-782377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:45:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:45:24 +0000   Mon, 22 Apr 2024 18:29:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:45:24 +0000   Mon, 22 Apr 2024 18:29:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:45:24 +0000   Mon, 22 Apr 2024 18:29:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:45:24 +0000   Mon, 22 Apr 2024 18:29:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.114
	  Hostname:    embed-certs-782377
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6e9919cbcdef4481b79ec61d03881f1d
	  System UUID:                6e9919cb-cdef-4481-b79e-c61d03881f1d
	  Boot ID:                    377d73fc-c18b-4f21-a34d-ee8dade6c327
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-425zd                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-44bfz                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-782377                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-782377             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-782377    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-6qsdm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-782377             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-lv49p               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    0 (0%)
	  memory             440Mi (20%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-782377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-782377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-782377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-782377 event: Registered Node embed-certs-782377 in Controller
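
The node conditions and events above are kubectl describe output; the same status can be read programmatically. Below is a minimal client-go sketch that prints the Conditions table for this node; the kubeconfig path is an assumption, and the node name is taken from this report.

	// Sketch of reading the node conditions shown above with client-go.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; adjust for the environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "embed-certs-782377", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, cond := range node.Status.Conditions {
			// Mirrors the Conditions table: Type, Status, Reason, Message.
			fmt.Printf("%-16s %-6s %-28s %s\n", cond.Type, cond.Status, cond.Reason, cond.Message)
		}
	}
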
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052644] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040412] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.572143] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.740035] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.409896] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.879707] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.116345] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.181736] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.149072] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.308485] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[  +4.597861] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +0.064120] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.204501] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +4.617472] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.448903] kauditd_printk_skb: 79 callbacks suppressed
	[Apr22 18:29] kauditd_printk_skb: 8 callbacks suppressed
	[  +1.629663] systemd-fstab-generator[3576]: Ignoring "noauto" option for root device
	[  +4.468145] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.077990] systemd-fstab-generator[3898]: Ignoring "noauto" option for root device
	[ +13.994816] systemd-fstab-generator[4098]: Ignoring "noauto" option for root device
	[  +0.080595] kauditd_printk_skb: 14 callbacks suppressed
	[Apr22 18:30] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [b22085c535e9cd4279fd23ed46c4aca374e891d2af3d2a71dc748091b2b40781] <==
	{"level":"info","ts":"2024-04-22T18:29:39.770566Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.114:2380"}
	{"level":"info","ts":"2024-04-22T18:29:40.613261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:40.613322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:40.613364Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgPreVoteResp from f0e2ae880f3a35e5 at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:40.613378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.613384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 received MsgVoteResp from f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.613392Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f0e2ae880f3a35e5 became leader at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.61341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f0e2ae880f3a35e5 elected leader f0e2ae880f3a35e5 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:40.615207Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.616507Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f0e2ae880f3a35e5","local-member-attributes":"{Name:embed-certs-782377 ClientURLs:[https://192.168.50.114:2379]}","request-path":"/0/members/f0e2ae880f3a35e5/attributes","cluster-id":"659e1302ad88139d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:29:40.616736Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:40.61718Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:40.617436Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"659e1302ad88139d","local-member-id":"f0e2ae880f3a35e5","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.617524Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.617566Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:40.617993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:40.618027Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:40.619323Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:29:40.629744Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.114:2379"}
	{"level":"info","ts":"2024-04-22T18:39:40.66113Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-04-22T18:39:40.671348Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":682,"took":"9.790584ms","hash":1338261051,"current-db-size-bytes":2146304,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2146304,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-04-22T18:39:40.671419Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1338261051,"revision":682,"compact-revision":-1}
	{"level":"info","ts":"2024-04-22T18:44:40.669524Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":924}
	{"level":"info","ts":"2024-04-22T18:44:40.675305Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":924,"took":"4.547918ms","hash":1073172970,"current-db-size-bytes":2146304,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1515520,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-04-22T18:44:40.675524Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1073172970,"revision":924,"compact-revision":682}
	
	
	==> kernel <==
	 18:45:29 up 21 min,  0 users,  load average: 0.06, 0.10, 0.10
	Linux embed-certs-782377 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [01c3e02d8cb9a596e2fadb304634bd2320580aece83e49a9f8a869a881b70b57] <==
	I0422 18:39:43.162635       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:40:43.162004       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:40:43.162226       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:40:43.162261       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:40:43.163261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:40:43.163430       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:40:43.163443       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:42:43.162398       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:42:43.162501       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:42:43.162514       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:42:43.163840       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:42:43.164018       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:42:43.164056       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:44:42.167392       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:44:42.167713       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0422 18:44:43.168421       1 handler_proxy.go:93] no RequestInfo found in the context
	W0422 18:44:43.168501       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:44:43.168529       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:44:43.168608       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0422 18:44:43.168636       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:44:43.169912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c5908d6f9660552c0bd54dbfa81b5ed68d644a82723bab3079095f081574cb70] <==
	I0422 18:39:58.662888       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:40:28.186458       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:40:28.671806       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:40:58.192407       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:40:58.680746       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:41:08.016613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="308.497µs"
	I0422 18:41:23.016787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="70.207µs"
	E0422 18:41:28.198234       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:41:28.688595       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:41:58.204333       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:41:58.697357       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:42:28.210117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:42:28.706083       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:42:58.217362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:42:58.714738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:43:28.223108       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:43:28.722511       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:43:58.228895       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:43:58.739091       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:44:28.233435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:44:28.748458       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:44:58.239126       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:44:58.758507       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:45:28.245606       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:45:28.766846       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0db3e27ffbb2023ab1a72c7e892356d83d810ec9171d1bbdc6635a0fee69c732] <==
	I0422 18:30:00.301732       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:30:00.371486       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.114"]
	I0422 18:30:00.536057       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:30:00.536100       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:30:00.536117       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:30:00.540398       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:30:00.540667       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:30:00.540710       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:30:00.542303       1 config.go:192] "Starting service config controller"
	I0422 18:30:00.542318       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:30:00.542345       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:30:00.542349       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:30:00.542879       1 config.go:319] "Starting node config controller"
	I0422 18:30:00.542889       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:30:00.642956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 18:30:00.643064       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:30:00.643310       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [081ed6bcbd5ca18e0b7e8fa3a53299ca1234d69ab6c07bbf9f71f2556f3523d9] <==
	W0422 18:29:42.223553       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:29:42.223592       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 18:29:43.064481       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 18:29:43.064535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 18:29:43.084215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 18:29:43.084273       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 18:29:43.135334       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:29:43.135446       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:29:43.231247       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:29:43.231364       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 18:29:43.270993       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 18:29:43.271058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 18:29:43.334255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 18:29:43.334309       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 18:29:43.334362       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:29:43.334372       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 18:29:43.372723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:29:43.372814       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 18:29:43.372862       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:29:43.372870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 18:29:43.391603       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 18:29:43.391662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 18:29:43.418254       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 18:29:43.418306       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0422 18:29:45.303530       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:42:45 embed-certs-782377 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:42:45 embed-certs-782377 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:42:52 embed-certs-782377 kubelet[3905]: E0422 18:42:52.998271    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:43:04 embed-certs-782377 kubelet[3905]: E0422 18:43:04.999383    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:43:17 embed-certs-782377 kubelet[3905]: E0422 18:43:17.998870    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:43:29 embed-certs-782377 kubelet[3905]: E0422 18:43:29.997107    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:43:44 embed-certs-782377 kubelet[3905]: E0422 18:43:44.999557    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:43:45 embed-certs-782377 kubelet[3905]: E0422 18:43:45.028177    3905 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:43:45 embed-certs-782377 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:43:45 embed-certs-782377 kubelet[3905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:43:45 embed-certs-782377 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:43:45 embed-certs-782377 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:43:56 embed-certs-782377 kubelet[3905]: E0422 18:43:56.998190    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:44:09 embed-certs-782377 kubelet[3905]: E0422 18:44:09.997519    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:44:23 embed-certs-782377 kubelet[3905]: E0422 18:44:23.997538    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:44:38 embed-certs-782377 kubelet[3905]: E0422 18:44:38.998717    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:44:45 embed-certs-782377 kubelet[3905]: E0422 18:44:45.030177    3905 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:44:45 embed-certs-782377 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:44:45 embed-certs-782377 kubelet[3905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:44:45 embed-certs-782377 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:44:45 embed-certs-782377 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:44:51 embed-certs-782377 kubelet[3905]: E0422 18:44:51.998182    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:45:06 embed-certs-782377 kubelet[3905]: E0422 18:45:06.998481    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:45:18 embed-certs-782377 kubelet[3905]: E0422 18:45:18.997578    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	Apr 22 18:45:29 embed-certs-782377 kubelet[3905]: E0422 18:45:29.998307    3905 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-lv49p" podUID="e99119a1-18ac-4ce8-ab9d-5cbbeddc243b"
	
	
	==> storage-provisioner [c0185f4d38b0254157031213f6848e3bbb64cd7440bb6ff3dcc24765b28e2756] <==
	I0422 18:30:00.772376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:30:00.791396       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:30:00.791465       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:30:00.840872       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:30:00.841251       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-782377_d0c4b64e-30dc-4fc5-9911-6e54bec8a68a!
	I0422 18:30:00.845040       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c42af0d-a36f-47e6-9d2c-00802569f696", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-782377_d0c4b64e-30dc-4fc5-9911-6e54bec8a68a became leader
	I0422 18:30:00.941911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-782377_d0c4b64e-30dc-4fc5-9911-6e54bec8a68a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-782377 -n embed-certs-782377
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-782377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-lv49p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-782377 describe pod metrics-server-569cc877fc-lv49p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-782377 describe pod metrics-server-569cc877fc-lv49p: exit status 1 (67.888188ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-lv49p" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-782377 describe pod metrics-server-569cc877fc-lv49p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (384.05s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (436.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-22 18:46:40.940799027 +0000 UTC m=+6585.670412635
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-856422 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.268µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-856422 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-856422 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-856422 logs -n 25: (1.390047795s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC | 22 Apr 24 18:45 UTC |
	| start   | -p newest-cni-505212 --memory=2200 --alsologtostderr   | newest-cni-505212            | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC | 22 Apr 24 18:46 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC | 22 Apr 24 18:45 UTC |
	| delete  | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC | 22 Apr 24 18:45 UTC |
	| addons  | enable metrics-server -p newest-cni-505212             | newest-cni-505212            | jenkins | v1.33.0 | 22 Apr 24 18:46 UTC | 22 Apr 24 18:46 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-505212                                   | newest-cni-505212            | jenkins | v1.33.0 | 22 Apr 24 18:46 UTC | 22 Apr 24 18:46 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-505212                  | newest-cni-505212            | jenkins | v1.33.0 | 22 Apr 24 18:46 UTC | 22 Apr 24 18:46 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-505212 --memory=2200 --alsologtostderr   | newest-cni-505212            | jenkins | v1.33.0 | 22 Apr 24 18:46 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:46:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:46:22.436940   85483 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:46:22.437064   85483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:46:22.437073   85483 out.go:304] Setting ErrFile to fd 2...
	I0422 18:46:22.437077   85483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:46:22.437270   85483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:46:22.437789   85483 out.go:298] Setting JSON to false
	I0422 18:46:22.438716   85483 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8928,"bootTime":1713802655,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:46:22.438794   85483 start.go:139] virtualization: kvm guest
	I0422 18:46:22.441192   85483 out.go:177] * [newest-cni-505212] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:46:22.442708   85483 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:46:22.442757   85483 notify.go:220] Checking for updates...
	I0422 18:46:22.445971   85483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:46:22.447779   85483 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:46:22.449233   85483 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:46:22.450894   85483 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:46:22.451849   85483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:46:22.453722   85483 config.go:182] Loaded profile config "newest-cni-505212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:46:22.454416   85483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:46:22.454471   85483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:46:22.470779   85483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36941
	I0422 18:46:22.471210   85483 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:46:22.471742   85483 main.go:141] libmachine: Using API Version  1
	I0422 18:46:22.471768   85483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:46:22.472226   85483 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:46:22.472454   85483 main.go:141] libmachine: (newest-cni-505212) Calling .DriverName
	I0422 18:46:22.472745   85483 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:46:22.473025   85483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:46:22.473063   85483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:46:22.487981   85483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34817
	I0422 18:46:22.488392   85483 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:46:22.488946   85483 main.go:141] libmachine: Using API Version  1
	I0422 18:46:22.488984   85483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:46:22.489274   85483 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:46:22.489496   85483 main.go:141] libmachine: (newest-cni-505212) Calling .DriverName
	I0422 18:46:22.526822   85483 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:46:22.528302   85483 start.go:297] selected driver: kvm2
	I0422 18:46:22.528317   85483 start.go:901] validating driver "kvm2" against &{Name:newest-cni-505212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:newest-cni-505212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.118 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:46:22.528489   85483 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:46:22.529341   85483 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:46:22.529403   85483 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:46:22.544517   85483 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:46:22.544886   85483 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0422 18:46:22.544945   85483 cni.go:84] Creating CNI manager for ""
	I0422 18:46:22.544958   85483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:46:22.545003   85483 start.go:340] cluster config:
	{Name:newest-cni-505212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-505212 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.118 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:46:22.545106   85483 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:46:22.546831   85483 out.go:177] * Starting "newest-cni-505212" primary control-plane node in "newest-cni-505212" cluster
	I0422 18:46:22.548198   85483 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:46:22.548228   85483 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:46:22.548237   85483 cache.go:56] Caching tarball of preloaded images
	I0422 18:46:22.548302   85483 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:46:22.548312   85483 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:46:22.548400   85483 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/newest-cni-505212/config.json ...
	I0422 18:46:22.548567   85483 start.go:360] acquireMachinesLock for newest-cni-505212: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:46:22.548616   85483 start.go:364] duration metric: took 27.308µs to acquireMachinesLock for "newest-cni-505212"
	I0422 18:46:22.548632   85483 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:46:22.548641   85483 fix.go:54] fixHost starting: 
	I0422 18:46:22.549006   85483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:46:22.549038   85483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:46:22.563047   85483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41671
	I0422 18:46:22.563527   85483 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:46:22.564212   85483 main.go:141] libmachine: Using API Version  1
	I0422 18:46:22.564263   85483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:46:22.564715   85483 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:46:22.564990   85483 main.go:141] libmachine: (newest-cni-505212) Calling .DriverName
	I0422 18:46:22.565176   85483 main.go:141] libmachine: (newest-cni-505212) Calling .GetState
	I0422 18:46:22.566917   85483 fix.go:112] recreateIfNeeded on newest-cni-505212: state=Stopped err=<nil>
	I0422 18:46:22.566939   85483 main.go:141] libmachine: (newest-cni-505212) Calling .DriverName
	W0422 18:46:22.567113   85483 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:46:22.569262   85483 out.go:177] * Restarting existing kvm2 VM for "newest-cni-505212" ...
	I0422 18:46:22.570810   85483 main.go:141] libmachine: (newest-cni-505212) Calling .Start
	I0422 18:46:22.570981   85483 main.go:141] libmachine: (newest-cni-505212) Ensuring networks are active...
	I0422 18:46:22.571758   85483 main.go:141] libmachine: (newest-cni-505212) Ensuring network default is active
	I0422 18:46:22.572237   85483 main.go:141] libmachine: (newest-cni-505212) Ensuring network mk-newest-cni-505212 is active
	I0422 18:46:22.572565   85483 main.go:141] libmachine: (newest-cni-505212) Getting domain xml...
	I0422 18:46:22.573201   85483 main.go:141] libmachine: (newest-cni-505212) Creating domain...
	I0422 18:46:23.794256   85483 main.go:141] libmachine: (newest-cni-505212) Waiting to get IP...
	I0422 18:46:23.795353   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:23.795838   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:23.795948   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:23.795825   85518 retry.go:31] will retry after 202.523034ms: waiting for machine to come up
	I0422 18:46:24.000543   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:24.001125   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:24.001165   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:24.001099   85518 retry.go:31] will retry after 327.48612ms: waiting for machine to come up
	I0422 18:46:24.330745   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:24.331174   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:24.331198   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:24.331116   85518 retry.go:31] will retry after 390.36798ms: waiting for machine to come up
	I0422 18:46:24.722638   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:24.723174   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:24.723192   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:24.723102   85518 retry.go:31] will retry after 433.415641ms: waiting for machine to come up
	I0422 18:46:25.157682   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:25.158093   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:25.158121   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:25.158042   85518 retry.go:31] will retry after 643.021793ms: waiting for machine to come up
	I0422 18:46:25.802892   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:25.803322   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:25.803351   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:25.803269   85518 retry.go:31] will retry after 636.43697ms: waiting for machine to come up
	I0422 18:46:26.441011   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:26.441454   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:26.441481   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:26.441415   85518 retry.go:31] will retry after 729.429756ms: waiting for machine to come up
	I0422 18:46:27.172214   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:27.172610   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:27.172638   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:27.172557   85518 retry.go:31] will retry after 1.140582249s: waiting for machine to come up
	I0422 18:46:28.314655   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:28.315183   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:28.315212   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:28.315128   85518 retry.go:31] will retry after 1.598647656s: waiting for machine to come up
	I0422 18:46:29.915078   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:29.915597   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:29.915623   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:29.915553   85518 retry.go:31] will retry after 1.66327379s: waiting for machine to come up
	I0422 18:46:31.581400   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:31.581936   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:31.581984   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:31.581866   85518 retry.go:31] will retry after 1.931057496s: waiting for machine to come up
	I0422 18:46:33.514079   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:33.514652   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:33.514689   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:33.514578   85518 retry.go:31] will retry after 3.175319019s: waiting for machine to come up
	I0422 18:46:36.692636   85483 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:46:36.693130   85483 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:46:36.693165   85483 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:46:36.693072   85518 retry.go:31] will retry after 3.586471084s: waiting for machine to come up
	
	
	==> CRI-O <==
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.616534613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811601616510083,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ebcea4d3-f2ea-4fbb-982c-a735444de6e3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.617297187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea0b1118-3890-4444-b16c-14a5998bc0b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.617349477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea0b1118-3890-4444-b16c-14a5998bc0b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.617578947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea0b1118-3890-4444-b16c-14a5998bc0b8 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.657365195Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e72e504-d177-4997-8150-24b7d4639cb0 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.657466081Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e72e504-d177-4997-8150-24b7d4639cb0 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.658499878Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7699458-22c8-4050-b6fb-0ebc4c3dd5ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.658912649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811601658888880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7699458-22c8-4050-b6fb-0ebc4c3dd5ea name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.659575356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd8b6c11-a42f-4d24-b6c2-bffaf943d8e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.659645198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd8b6c11-a42f-4d24-b6c2-bffaf943d8e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.659870353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd8b6c11-a42f-4d24-b6c2-bffaf943d8e2 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.703478552Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5dcfca1-89f8-4a68-b9e8-69f4438f5b61 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.703556128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5dcfca1-89f8-4a68-b9e8-69f4438f5b61 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.704904524Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4651a65c-3169-48f4-99bc-2ce78ccade31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.705562049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811601705537515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4651a65c-3169-48f4-99bc-2ce78ccade31 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.706278356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4448dee-b7b6-4113-a016-c770c412c862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.706354634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4448dee-b7b6-4113-a016-c770c412c862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.706605077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4448dee-b7b6-4113-a016-c770c412c862 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.746420788Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d512a618-53e3-47c0-b092-c9e973582051 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.746580751Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d512a618-53e3-47c0-b092-c9e973582051 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.748767609Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97145d99-989a-43dd-a930-408a1c8a1ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.749362900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811601749329490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133261,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97145d99-989a-43dd-a930-408a1c8a1ff0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.750128151Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c5e1452-9899-49b5-9a60-a9dac6622095 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.750216978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c5e1452-9899-49b5-9a60-a9dac6622095 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:46:41 default-k8s-diff-port-856422 crio[721]: time="2024-04-22 18:46:41.750449237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633,PodSandboxId:2b37946810279b4718fbf266fd4c72d84c8f6c8ba407175a2041f55b73f4100c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810619918779634,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9998f3b2-a39c-4b2c-a7c2-f02aec08f548,},Annotations:map[string]string{io.kubernetes.container.hash: b1399267,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea,PodSandboxId:1c76f6957c5237efaf8efe1421ef4db5350754b3bb24cd7c5254bbebf6819d78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619347994133,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vc6vz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a7134db-ac2b-49d9-ab61-b4acd6ab4d67,},Annotations:map[string]string{io.kubernetes.container.hash: d38257ef,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b,PodSandboxId:5fb5d022981c93fb6283a11ab43c74fe5b4949e6d0d5313b58d4402af97ba73d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810619140515516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-jg8h6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 031f1940-ae96-44ae-a69c-ea0bbdce81fb,},Annotations:map[string]string{io.kubernetes.container.hash: 386bbe68,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986,PodSandboxId:ef5982a75f623fb473d9b16d7f2166f1c545908b46b29a54c2e069b7b3ce8f87,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING
,CreatedAt:1713810618413911084,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4m8cm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0673173-2469-4cef-9bef-1bee7504559c,},Annotations:map[string]string{io.kubernetes.container.hash: 5915540f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac,PodSandboxId:f475b95b1aca6251a6709fb58c64cff551be18b53557f6b44ac27fdf856039de,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:171381059761344796
6,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f90176445cd3959e25174c08c1688c45,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462,PodSandboxId:1b498a2ed492de714075b782a2b09dc791305f9a7f855990ab8cfdb24f3396e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810597577871136,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f3621ba1fcbb888b66b3d2a075e4fa1,},Annotations:map[string]string{io.kubernetes.container.hash: 74346ef5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445,PodSandboxId:36f42c5c15adb5f4f20a6d2c7d0770f327928c34f1b606aa66433ea8f233f67f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810597541733135,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff0e65cc4308339ea8fadc15bcfa2684,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb,PodSandboxId:c52878a5f3ab117905664f5794275d5feb0c74f7c4d863c98d50bf550aabd0b9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810597498809875,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f,PodSandboxId:9dff4617c7e86539ce6538ff921fb43c87371214d9624d00f490129762fa3524,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_EXITED,CreatedAt:1713810305034448866,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-856422,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5579cb4c8bced1b607425c27b729efcf,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca0e747,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c5e1452-9899-49b5-9a60-a9dac6622095 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7ee4eac8d0dfa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   2b37946810279       storage-provisioner
	abf55b7ba4ed6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   1c76f6957c523       coredns-7db6d8ff4d-vc6vz
	39ab7d17fd2ea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   5fb5d022981c9       coredns-7db6d8ff4d-jg8h6
	e08675236130d       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   16 minutes ago      Running             kube-proxy                0                   ef5982a75f623       kube-proxy-4m8cm
	3d96267bdd14c       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   16 minutes ago      Running             kube-scheduler            2                   f475b95b1aca6       kube-scheduler-default-k8s-diff-port-856422
	2532288e8ed99       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   16 minutes ago      Running             etcd                      2                   1b498a2ed492d       etcd-default-k8s-diff-port-856422
	5e4ca3cad7be0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   16 minutes ago      Running             kube-controller-manager   2                   36f42c5c15adb       kube-controller-manager-default-k8s-diff-port-856422
	2540e6dbfeb70       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   16 minutes ago      Running             kube-apiserver            2                   c52878a5f3ab1       kube-apiserver-default-k8s-diff-port-856422
	fdb735d23867d       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   21 minutes ago      Exited              kube-apiserver            1                   9dff4617c7e86       kube-apiserver-default-k8s-diff-port-856422
	
	
	==> coredns [39ab7d17fd2ea8ad05a37430140216995d284dcf3241879499490a2205d1716b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [abf55b7ba4ed6a318aad811510ebd02e1a54bf9b9a14e7e0f8ed22daace6c9ea] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-856422
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-856422
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=default-k8s-diff-port-856422
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:30:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-856422
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:45:41 +0000   Mon, 22 Apr 2024 18:29:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:45:41 +0000   Mon, 22 Apr 2024 18:29:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:45:41 +0000   Mon, 22 Apr 2024 18:29:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:45:41 +0000   Mon, 22 Apr 2024 18:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.206
	  Hostname:    default-k8s-diff-port-856422
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3bc25147b44b4422871f3fb405e24b9c
	  System UUID:                3bc25147-b44b-4422-871f-3fb405e24b9c
	  Boot ID:                    af94f6ce-ea73-4043-b56f-415b0dd034ee
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jg8h6                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-vc6vz                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-856422                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-856422             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-856422    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-4m8cm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-856422             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-569cc877fc-jmdnk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-856422 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-856422 event: Registered Node default-k8s-diff-port-856422 in Controller
	
	
	==> dmesg <==
	[  +0.052360] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041965] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.716838] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.852105] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.513881] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.948521] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.064468] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073789] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.219845] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.147076] systemd-fstab-generator[676]: Ignoring "noauto" option for root device
	[  +0.312900] systemd-fstab-generator[705]: Ignoring "noauto" option for root device
	[Apr22 18:25] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.063319] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.230304] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +5.617553] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.558909] kauditd_printk_skb: 79 callbacks suppressed
	[Apr22 18:29] systemd-fstab-generator[3592]: Ignoring "noauto" option for root device
	[  +0.068056] kauditd_printk_skb: 9 callbacks suppressed
	[Apr22 18:30] systemd-fstab-generator[3908]: Ignoring "noauto" option for root device
	[  +0.080586] kauditd_printk_skb: 54 callbacks suppressed
	[ +14.871170] systemd-fstab-generator[4121]: Ignoring "noauto" option for root device
	[  +0.113616] kauditd_printk_skb: 12 callbacks suppressed
	[Apr22 18:31] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [2532288e8ed99609627339efd8aca2b2335d42a26d2a9309b453405275e76462] <==
	{"level":"info","ts":"2024-04-22T18:29:58.272031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:58.27227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:58.272303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 received MsgPreVoteResp from 4b284f151a3a3636 at term 1"}
	{"level":"info","ts":"2024-04-22T18:29:58.272399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 became candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.272428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 received MsgVoteResp from 4b284f151a3a3636 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.272522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4b284f151a3a3636 became leader at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.272558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4b284f151a3a3636 elected leader 4b284f151a3a3636 at term 2"}
	{"level":"info","ts":"2024-04-22T18:29:58.277321Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.277981Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:58.284352Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d466202ffa4fc203","local-member-id":"4b284f151a3a3636","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.28445Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.284495Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:29:58.284514Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:29:58.277907Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"4b284f151a3a3636","local-member-attributes":"{Name:default-k8s-diff-port-856422 ClientURLs:[https://192.168.61.206:2379]}","request-path":"/0/members/4b284f151a3a3636/attributes","cluster-id":"d466202ffa4fc203","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:29:58.301534Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.206:2379"}
	{"level":"info","ts":"2024-04-22T18:29:58.301663Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:58.301694Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:29:58.306043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:39:58.522818Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":715}
	{"level":"info","ts":"2024-04-22T18:39:58.53223Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":715,"took":"8.886141ms","hash":3040160546,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2306048,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-22T18:39:58.532301Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3040160546,"revision":715,"compact-revision":-1}
	{"level":"info","ts":"2024-04-22T18:44:58.530736Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":958}
	{"level":"info","ts":"2024-04-22T18:44:58.53535Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":958,"took":"4.161369ms","hash":999166526,"current-db-size-bytes":2306048,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1552384,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-22T18:44:58.535421Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":999166526,"revision":958,"compact-revision":715}
	{"level":"info","ts":"2024-04-22T18:45:45.456395Z","caller":"traceutil/trace.go:171","msg":"trace[1280442144] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"108.463696ms","start":"2024-04-22T18:45:45.347803Z","end":"2024-04-22T18:45:45.456267Z","steps":["trace[1280442144] 'process raft request'  (duration: 107.909401ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:46:42 up 22 min,  0 users,  load average: 0.18, 0.23, 0.20
	Linux default-k8s-diff-port-856422 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2540e6dbfeb706110d1cd6ab7670ca60246dec63f04ce92204ffb82ab9ceffbb] <==
	I0422 18:41:01.449100       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:43:01.448836       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:43:01.449232       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:43:01.449270       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:43:01.449316       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:43:01.449423       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:43:01.451321       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:45:00.454018       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:45:00.454144       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0422 18:45:01.455205       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:45:01.455333       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:45:01.455361       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:45:01.455430       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:45:01.455496       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:45:01.456787       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:46:01.456148       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:46:01.456244       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:46:01.456262       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:46:01.457493       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:46:01.457567       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:46:01.457575       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fdb735d23867deb347ecbbee74abab2f9673867362e4af7304439b270334b71f] <==
	W0422 18:29:51.908687       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:51.960417       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:51.994852       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.014495       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.073167       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.086811       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.178745       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.216519       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.219256       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.250598       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.278868       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.281463       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.350220       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.364651       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.364669       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.379135       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.433200       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.533018       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.570647       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.596429       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.610486       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.679603       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:52.789789       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:53.316138       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0422 18:29:53.359707       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5e4ca3cad7be0675b5f3f988e8bd67dda8ddcb284749454a8978f1559dfad445] <==
	E0422 18:41:17.906684       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:41:18.379598       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:41:22.183845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="191.904µs"
	I0422 18:41:37.182806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="83.324µs"
	E0422 18:41:47.915006       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:41:48.389139       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:42:17.920607       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:42:18.397409       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:42:47.925470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:42:48.406524       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:43:17.930666       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:43:18.420075       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:43:47.934709       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:43:48.428640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:44:17.940181       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:44:18.438549       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:44:47.947075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:44:48.447463       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:45:17.952484       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:45:18.457444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:45:47.959771       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:45:48.466983       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:46:17.964911       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:46:18.475561       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:46:29.184332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="188.674µs"
	
	
	==> kube-proxy [e08675236130d6a4254000d7e1d956995658dbff2141d9822c41a735e9f30986] <==
	I0422 18:30:18.849498       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:30:18.875885       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.206"]
	I0422 18:30:18.961627       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:30:18.961689       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:30:18.961710       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:30:18.964822       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:30:18.965141       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:30:18.965165       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:30:18.966787       1 config.go:192] "Starting service config controller"
	I0422 18:30:18.966827       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:30:18.966853       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:30:18.966858       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:30:18.972134       1 config.go:319] "Starting node config controller"
	I0422 18:30:18.972171       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:30:19.067039       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 18:30:19.067101       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:30:19.072713       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3d96267bdd14c5a7c1cd1758a06e4387d56246fef42a36167eb4098d86faa1ac] <==
	W0422 18:30:00.490454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 18:30:00.490491       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 18:30:00.490518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:30:00.490544       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 18:30:00.491829       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:30:00.491888       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:30:01.417490       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:01.417556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:01.457226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 18:30:01.457281       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 18:30:01.545649       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:01.545707       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:01.552879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 18:30:01.553000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 18:30:01.569179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:30:01.569241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 18:30:01.739732       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:01.739900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:01.739976       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:30:01.740058       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:30:01.780896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:30:01.781022       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 18:30:01.787040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 18:30:01.787143       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0422 18:30:03.890046       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 18:44:08 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:44:08.163877    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:44:19 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:44:19.165576    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:44:30 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:44:30.164465    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:44:44 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:44:44.164341    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:44:57 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:44:57.164565    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:45:03 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:45:03.188343    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:45:03 default-k8s-diff-port-856422 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:45:03 default-k8s-diff-port-856422 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:45:03 default-k8s-diff-port-856422 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:45:03 default-k8s-diff-port-856422 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:45:08 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:45:08.165394    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:45:20 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:45:20.164524    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:45:33 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:45:33.166251    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:45:48 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:45:48.164188    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:46:00 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:46:00.164654    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:46:03 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:46:03.186590    3915 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:46:03 default-k8s-diff-port-856422 kubelet[3915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:46:03 default-k8s-diff-port-856422 kubelet[3915]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:46:03 default-k8s-diff-port-856422 kubelet[3915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:46:03 default-k8s-diff-port-856422 kubelet[3915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:46:14 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:46:14.179780    3915 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 22 18:46:14 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:46:14.180215    3915 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 22 18:46:14 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:46:14.181558    3915 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lgsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-jmdnk_kube-system(54d9a335-db4a-417d-9909-256d3a2b7fd0): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 22 18:46:14 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:46:14.181739    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	Apr 22 18:46:29 default-k8s-diff-port-856422 kubelet[3915]: E0422 18:46:29.166267    3915 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jmdnk" podUID="54d9a335-db4a-417d-9909-256d3a2b7fd0"
	
	
	==> storage-provisioner [7ee4eac8d0dfa44791eb03e85e04f6f230b49d8ca09bf5ddd6fc1f968386a633] <==
	I0422 18:30:20.047257       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:30:20.067226       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:30:20.067310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:30:20.089915       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:30:20.090314       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-856422_13ff8122-b447-4862-9058-e11fab20460d!
	I0422 18:30:20.090582       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c82121e4-0669-4a12-a537-ff70e2307a04", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-856422_13ff8122-b447-4862-9058-e11fab20460d became leader
	I0422 18:30:20.190579       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-856422_13ff8122-b447-4862-9058-e11fab20460d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jmdnk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 describe pod metrics-server-569cc877fc-jmdnk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-856422 describe pod metrics-server-569cc877fc-jmdnk: exit status 1 (64.389528ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jmdnk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-856422 describe pod metrics-server-569cc877fc-jmdnk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (436.15s)
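Note: the kubelet log above shows why this check never succeeded. The metrics-server container is configured with an image on a placeholder registry (fake.domain/registry.k8s.io/echoserver:1.4) that cannot resolve, so the pod stayed in ImagePullBackOff; the final describe then returned NotFound, likely because the pod was removed between the two post-mortem commands. As a rough manual sketch (not part of the test itself, and assuming the default-k8s-diff-port-856422 context is still reachable), the configured image can be read from the deployment with standard kubectl:

	# Hypothetical manual check; the "metrics-server" deployment name is inferred
	# from the metrics-server-569cc877fc ReplicaSet seen in the logs above.
	kubectl --context default-k8s-diff-port-856422 -n kube-system \
	  get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'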

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (308.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407991 -n no-preload-407991
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-22 18:45:26.258157771 +0000 UTC m=+6510.987771372
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-407991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-407991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.017µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-407991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
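For reference (not part of the captured test output): the wait that timed out above amounts to polling for pods labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace under a 9m0s context deadline. A minimal client-go sketch of that pattern follows; the kubeconfig path is an illustrative assumption, and this is not the test's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path is illustrative; the real test derives it from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Mirror the 9m0s wait from the failure above with a context deadline.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err == nil && len(pods.Items) > 0 {
			fmt.Println("found pod:", pods.Items[0].Name)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("context deadline exceeded while waiting for dashboard pod")
			return
		case <-time.After(5 * time.Second):
		}
	}
}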
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-407991 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-407991 logs -n 25: (1.280804847s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC | 22 Apr 24 18:45 UTC |
	| start   | -p newest-cni-505212 --memory=2200 --alsologtostderr   | newest-cni-505212            | jenkins | v1.33.0 | 22 Apr 24 18:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:45:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:45:10.112916   84518 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:45:10.113190   84518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:45:10.113203   84518 out.go:304] Setting ErrFile to fd 2...
	I0422 18:45:10.113209   84518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:45:10.113476   84518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:45:10.114094   84518 out.go:298] Setting JSON to false
	I0422 18:45:10.115020   84518 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8855,"bootTime":1713802655,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:45:10.115081   84518 start.go:139] virtualization: kvm guest
	I0422 18:45:10.117465   84518 out.go:177] * [newest-cni-505212] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:45:10.118909   84518 notify.go:220] Checking for updates...
	I0422 18:45:10.118917   84518 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:45:10.120251   84518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:45:10.121708   84518 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:45:10.123073   84518 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:45:10.124539   84518 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:45:10.125871   84518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:45:10.127954   84518 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:45:10.128103   84518 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:45:10.128239   84518 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:45:10.128384   84518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:45:10.167266   84518 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:45:10.168686   84518 start.go:297] selected driver: kvm2
	I0422 18:45:10.168708   84518 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:45:10.168735   84518 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:45:10.169596   84518 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:45:10.169660   84518 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:45:10.185116   84518 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:45:10.185194   84518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0422 18:45:10.185238   84518 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0422 18:45:10.185616   84518 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0422 18:45:10.185689   84518 cni.go:84] Creating CNI manager for ""
	I0422 18:45:10.185706   84518 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:45:10.185719   84518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 18:45:10.185811   84518 start.go:340] cluster config:
	{Name:newest-cni-505212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:newest-cni-505212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:45:10.185966   84518 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:45:10.187822   84518 out.go:177] * Starting "newest-cni-505212" primary control-plane node in "newest-cni-505212" cluster
	I0422 18:45:10.188993   84518 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:45:10.189038   84518 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:45:10.189054   84518 cache.go:56] Caching tarball of preloaded images
	I0422 18:45:10.189148   84518 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:45:10.189161   84518 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on crio
	I0422 18:45:10.189260   84518 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/newest-cni-505212/config.json ...
	I0422 18:45:10.189282   84518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/newest-cni-505212/config.json: {Name:mkd111e05cfb582e9c3b193258ee98577aa32be0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:45:10.189444   84518 start.go:360] acquireMachinesLock for newest-cni-505212: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:45:10.189485   84518 start.go:364] duration metric: took 23.945µs to acquireMachinesLock for "newest-cni-505212"
	I0422 18:45:10.189507   84518 start.go:93] Provisioning new machine with config: &{Name:newest-cni-505212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:newest-cni-505212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:45:10.189621   84518 start.go:125] createHost starting for "" (driver="kvm2")
	I0422 18:45:10.191272   84518 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0422 18:45:10.191442   84518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:45:10.191495   84518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:45:10.206981   84518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40341
	I0422 18:45:10.207438   84518 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:45:10.208114   84518 main.go:141] libmachine: Using API Version  1
	I0422 18:45:10.208141   84518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:45:10.208494   84518 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:45:10.208685   84518 main.go:141] libmachine: (newest-cni-505212) Calling .GetMachineName
	I0422 18:45:10.208826   84518 main.go:141] libmachine: (newest-cni-505212) Calling .DriverName
	I0422 18:45:10.209044   84518 start.go:159] libmachine.API.Create for "newest-cni-505212" (driver="kvm2")
	I0422 18:45:10.209125   84518 client.go:168] LocalClient.Create starting
	I0422 18:45:10.209158   84518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem
	I0422 18:45:10.209190   84518 main.go:141] libmachine: Decoding PEM data...
	I0422 18:45:10.209208   84518 main.go:141] libmachine: Parsing certificate...
	I0422 18:45:10.209265   84518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem
	I0422 18:45:10.209289   84518 main.go:141] libmachine: Decoding PEM data...
	I0422 18:45:10.209300   84518 main.go:141] libmachine: Parsing certificate...
	I0422 18:45:10.209313   84518 main.go:141] libmachine: Running pre-create checks...
	I0422 18:45:10.209321   84518 main.go:141] libmachine: (newest-cni-505212) Calling .PreCreateCheck
	I0422 18:45:10.209759   84518 main.go:141] libmachine: (newest-cni-505212) Calling .GetConfigRaw
	I0422 18:45:10.210188   84518 main.go:141] libmachine: Creating machine...
	I0422 18:45:10.210206   84518 main.go:141] libmachine: (newest-cni-505212) Calling .Create
	I0422 18:45:10.210337   84518 main.go:141] libmachine: (newest-cni-505212) Creating KVM machine...
	I0422 18:45:10.211724   84518 main.go:141] libmachine: (newest-cni-505212) DBG | found existing default KVM network
	I0422 18:45:10.212958   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.212765   84540 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:af:3f:3b} reservation:<nil>}
	I0422 18:45:10.213804   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.213715   84540 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:03:c9} reservation:<nil>}
	I0422 18:45:10.214661   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.214591   84540 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:9c:60:80} reservation:<nil>}
	I0422 18:45:10.215723   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.215636   84540 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289890}
	I0422 18:45:10.215794   84518 main.go:141] libmachine: (newest-cni-505212) DBG | created network xml: 
	I0422 18:45:10.215818   84518 main.go:141] libmachine: (newest-cni-505212) DBG | <network>
	I0422 18:45:10.215849   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   <name>mk-newest-cni-505212</name>
	I0422 18:45:10.215910   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   <dns enable='no'/>
	I0422 18:45:10.215926   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   
	I0422 18:45:10.215940   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0422 18:45:10.215951   84518 main.go:141] libmachine: (newest-cni-505212) DBG |     <dhcp>
	I0422 18:45:10.215983   84518 main.go:141] libmachine: (newest-cni-505212) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0422 18:45:10.215999   84518 main.go:141] libmachine: (newest-cni-505212) DBG |     </dhcp>
	I0422 18:45:10.216007   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   </ip>
	I0422 18:45:10.216029   84518 main.go:141] libmachine: (newest-cni-505212) DBG |   
	I0422 18:45:10.216046   84518 main.go:141] libmachine: (newest-cni-505212) DBG | </network>
	I0422 18:45:10.216054   84518 main.go:141] libmachine: (newest-cni-505212) DBG | 
	I0422 18:45:10.221511   84518 main.go:141] libmachine: (newest-cni-505212) DBG | trying to create private KVM network mk-newest-cni-505212 192.168.72.0/24...
	I0422 18:45:10.295286   84518 main.go:141] libmachine: (newest-cni-505212) DBG | private KVM network mk-newest-cni-505212 192.168.72.0/24 created
	I0422 18:45:10.295322   84518 main.go:141] libmachine: (newest-cni-505212) Setting up store path in /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212 ...
	I0422 18:45:10.295347   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.295265   84540 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:45:10.295365   84518 main.go:141] libmachine: (newest-cni-505212) Building disk image from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 18:45:10.295471   84518 main.go:141] libmachine: (newest-cni-505212) Downloading /home/jenkins/minikube-integration/18706-11572/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0422 18:45:10.524206   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.524099   84540 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/id_rsa...
	I0422 18:45:10.777495   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.777323   84540 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/newest-cni-505212.rawdisk...
	I0422 18:45:10.777552   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Writing magic tar header
	I0422 18:45:10.777615   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Writing SSH key tar header
	I0422 18:45:10.777661   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:10.777483   84540 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212 ...
	I0422 18:45:10.777680   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212 (perms=drwx------)
	I0422 18:45:10.777704   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube/machines (perms=drwxr-xr-x)
	I0422 18:45:10.777715   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572/.minikube (perms=drwxr-xr-x)
	I0422 18:45:10.777732   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration/18706-11572 (perms=drwxrwxr-x)
	I0422 18:45:10.777745   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212
	I0422 18:45:10.777753   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0422 18:45:10.777767   84518 main.go:141] libmachine: (newest-cni-505212) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0422 18:45:10.777779   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube/machines
	I0422 18:45:10.777787   84518 main.go:141] libmachine: (newest-cni-505212) Creating domain...
	I0422 18:45:10.777797   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:45:10.777805   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18706-11572
	I0422 18:45:10.777811   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0422 18:45:10.777819   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home/jenkins
	I0422 18:45:10.777825   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Checking permissions on dir: /home
	I0422 18:45:10.777833   84518 main.go:141] libmachine: (newest-cni-505212) DBG | Skipping /home - not owner
	I0422 18:45:10.778938   84518 main.go:141] libmachine: (newest-cni-505212) define libvirt domain using xml: 
	I0422 18:45:10.778961   84518 main.go:141] libmachine: (newest-cni-505212) <domain type='kvm'>
	I0422 18:45:10.778968   84518 main.go:141] libmachine: (newest-cni-505212)   <name>newest-cni-505212</name>
	I0422 18:45:10.778973   84518 main.go:141] libmachine: (newest-cni-505212)   <memory unit='MiB'>2200</memory>
	I0422 18:45:10.778981   84518 main.go:141] libmachine: (newest-cni-505212)   <vcpu>2</vcpu>
	I0422 18:45:10.778991   84518 main.go:141] libmachine: (newest-cni-505212)   <features>
	I0422 18:45:10.779002   84518 main.go:141] libmachine: (newest-cni-505212)     <acpi/>
	I0422 18:45:10.779013   84518 main.go:141] libmachine: (newest-cni-505212)     <apic/>
	I0422 18:45:10.779021   84518 main.go:141] libmachine: (newest-cni-505212)     <pae/>
	I0422 18:45:10.779030   84518 main.go:141] libmachine: (newest-cni-505212)     
	I0422 18:45:10.779038   84518 main.go:141] libmachine: (newest-cni-505212)   </features>
	I0422 18:45:10.779043   84518 main.go:141] libmachine: (newest-cni-505212)   <cpu mode='host-passthrough'>
	I0422 18:45:10.779050   84518 main.go:141] libmachine: (newest-cni-505212)   
	I0422 18:45:10.779055   84518 main.go:141] libmachine: (newest-cni-505212)   </cpu>
	I0422 18:45:10.779062   84518 main.go:141] libmachine: (newest-cni-505212)   <os>
	I0422 18:45:10.779066   84518 main.go:141] libmachine: (newest-cni-505212)     <type>hvm</type>
	I0422 18:45:10.779074   84518 main.go:141] libmachine: (newest-cni-505212)     <boot dev='cdrom'/>
	I0422 18:45:10.779079   84518 main.go:141] libmachine: (newest-cni-505212)     <boot dev='hd'/>
	I0422 18:45:10.779087   84518 main.go:141] libmachine: (newest-cni-505212)     <bootmenu enable='no'/>
	I0422 18:45:10.779092   84518 main.go:141] libmachine: (newest-cni-505212)   </os>
	I0422 18:45:10.779099   84518 main.go:141] libmachine: (newest-cni-505212)   <devices>
	I0422 18:45:10.779110   84518 main.go:141] libmachine: (newest-cni-505212)     <disk type='file' device='cdrom'>
	I0422 18:45:10.779136   84518 main.go:141] libmachine: (newest-cni-505212)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/boot2docker.iso'/>
	I0422 18:45:10.779157   84518 main.go:141] libmachine: (newest-cni-505212)       <target dev='hdc' bus='scsi'/>
	I0422 18:45:10.779165   84518 main.go:141] libmachine: (newest-cni-505212)       <readonly/>
	I0422 18:45:10.779173   84518 main.go:141] libmachine: (newest-cni-505212)     </disk>
	I0422 18:45:10.779179   84518 main.go:141] libmachine: (newest-cni-505212)     <disk type='file' device='disk'>
	I0422 18:45:10.779187   84518 main.go:141] libmachine: (newest-cni-505212)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0422 18:45:10.779207   84518 main.go:141] libmachine: (newest-cni-505212)       <source file='/home/jenkins/minikube-integration/18706-11572/.minikube/machines/newest-cni-505212/newest-cni-505212.rawdisk'/>
	I0422 18:45:10.779215   84518 main.go:141] libmachine: (newest-cni-505212)       <target dev='hda' bus='virtio'/>
	I0422 18:45:10.779221   84518 main.go:141] libmachine: (newest-cni-505212)     </disk>
	I0422 18:45:10.779229   84518 main.go:141] libmachine: (newest-cni-505212)     <interface type='network'>
	I0422 18:45:10.779255   84518 main.go:141] libmachine: (newest-cni-505212)       <source network='mk-newest-cni-505212'/>
	I0422 18:45:10.779285   84518 main.go:141] libmachine: (newest-cni-505212)       <model type='virtio'/>
	I0422 18:45:10.779294   84518 main.go:141] libmachine: (newest-cni-505212)     </interface>
	I0422 18:45:10.779300   84518 main.go:141] libmachine: (newest-cni-505212)     <interface type='network'>
	I0422 18:45:10.779316   84518 main.go:141] libmachine: (newest-cni-505212)       <source network='default'/>
	I0422 18:45:10.779327   84518 main.go:141] libmachine: (newest-cni-505212)       <model type='virtio'/>
	I0422 18:45:10.779335   84518 main.go:141] libmachine: (newest-cni-505212)     </interface>
	I0422 18:45:10.779343   84518 main.go:141] libmachine: (newest-cni-505212)     <serial type='pty'>
	I0422 18:45:10.779351   84518 main.go:141] libmachine: (newest-cni-505212)       <target port='0'/>
	I0422 18:45:10.779358   84518 main.go:141] libmachine: (newest-cni-505212)     </serial>
	I0422 18:45:10.779364   84518 main.go:141] libmachine: (newest-cni-505212)     <console type='pty'>
	I0422 18:45:10.779371   84518 main.go:141] libmachine: (newest-cni-505212)       <target type='serial' port='0'/>
	I0422 18:45:10.779380   84518 main.go:141] libmachine: (newest-cni-505212)     </console>
	I0422 18:45:10.779387   84518 main.go:141] libmachine: (newest-cni-505212)     <rng model='virtio'>
	I0422 18:45:10.779411   84518 main.go:141] libmachine: (newest-cni-505212)       <backend model='random'>/dev/random</backend>
	I0422 18:45:10.779429   84518 main.go:141] libmachine: (newest-cni-505212)     </rng>
	I0422 18:45:10.779465   84518 main.go:141] libmachine: (newest-cni-505212)     
	I0422 18:45:10.779482   84518 main.go:141] libmachine: (newest-cni-505212)     
	I0422 18:45:10.779491   84518 main.go:141] libmachine: (newest-cni-505212)   </devices>
	I0422 18:45:10.779500   84518 main.go:141] libmachine: (newest-cni-505212) </domain>
	I0422 18:45:10.779511   84518 main.go:141] libmachine: (newest-cni-505212) 
	I0422 18:45:10.784393   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:e3:23:d8 in network default
	I0422 18:45:10.784977   84518 main.go:141] libmachine: (newest-cni-505212) Ensuring networks are active...
	I0422 18:45:10.785004   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:10.785773   84518 main.go:141] libmachine: (newest-cni-505212) Ensuring network default is active
	I0422 18:45:10.786215   84518 main.go:141] libmachine: (newest-cni-505212) Ensuring network mk-newest-cni-505212 is active
	I0422 18:45:10.786738   84518 main.go:141] libmachine: (newest-cni-505212) Getting domain xml...
	I0422 18:45:10.787599   84518 main.go:141] libmachine: (newest-cni-505212) Creating domain...
	I0422 18:45:12.040257   84518 main.go:141] libmachine: (newest-cni-505212) Waiting to get IP...
	I0422 18:45:12.041090   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:12.041604   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:12.041639   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:12.041591   84540 retry.go:31] will retry after 302.443308ms: waiting for machine to come up
	I0422 18:45:12.346125   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:12.346770   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:12.346795   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:12.346738   84540 retry.go:31] will retry after 336.383544ms: waiting for machine to come up
	I0422 18:45:12.684177   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:12.684660   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:12.684683   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:12.684626   84540 retry.go:31] will retry after 406.194746ms: waiting for machine to come up
	I0422 18:45:13.092322   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:13.092809   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:13.092833   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:13.092782   84540 retry.go:31] will retry after 382.460714ms: waiting for machine to come up
	I0422 18:45:13.477433   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:13.477908   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:13.477933   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:13.477856   84540 retry.go:31] will retry after 604.904054ms: waiting for machine to come up
	I0422 18:45:14.084786   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:14.085339   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:14.085381   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:14.085307   84540 retry.go:31] will retry after 943.058132ms: waiting for machine to come up
	I0422 18:45:15.029471   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:15.029948   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:15.029977   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:15.029891   84540 retry.go:31] will retry after 1.092745482s: waiting for machine to come up
	I0422 18:45:16.124142   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:16.124614   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:16.124641   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:16.124566   84540 retry.go:31] will retry after 1.247361176s: waiting for machine to come up
	I0422 18:45:17.373250   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:17.373690   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:17.373720   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:17.373649   84540 retry.go:31] will retry after 1.782608696s: waiting for machine to come up
	I0422 18:45:19.157944   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:19.158371   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:19.158414   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:19.158314   84540 retry.go:31] will retry after 1.833762676s: waiting for machine to come up
	I0422 18:45:20.994006   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:20.994603   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:20.994630   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:20.994549   84540 retry.go:31] will retry after 2.649935927s: waiting for machine to come up
	I0422 18:45:23.647372   84518 main.go:141] libmachine: (newest-cni-505212) DBG | domain newest-cni-505212 has defined MAC address 52:54:00:f7:ca:59 in network mk-newest-cni-505212
	I0422 18:45:23.647834   84518 main.go:141] libmachine: (newest-cni-505212) DBG | unable to find current IP address of domain newest-cni-505212 in network mk-newest-cni-505212
	I0422 18:45:23.647864   84518 main.go:141] libmachine: (newest-cni-505212) DBG | I0422 18:45:23.647793   84540 retry.go:31] will retry after 3.367112316s: waiting for machine to come up
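Aside (not part of the captured log): the "will retry after ..." lines above follow the usual pattern of retrying with growing, jittered delays until the VM reports an IP address. A minimal Go sketch of that pattern is below; the lookup stub and durations are illustrative assumptions, not minikube's actual retry code.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases;
// here it simply succeeds on the fourth attempt for demonstration.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.72.2", nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	backoff := 300 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(attempt); err == nil {
			fmt.Println("machine is up with IP", ip)
			return
		}
		// Jittered, growing delay, mirroring the increasing intervals in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	fmt.Println("timed out waiting for machine IP")
}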
	
	
	==> CRI-O <==
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.919017554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811526918998330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8fef3a8-80a6-4681-94e6-753104861e82 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.919897887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cad9ac24-c50f-4b8d-b1af-884b127757c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.919978556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cad9ac24-c50f-4b8d-b1af-884b127757c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.920232484Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8,PodSandboxId:91405b7dfb5119be8e9ac5a920602aea5af70d0709f7704ff8d5a02dc133eca2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810673666135964,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c704413-c118-4a17-9a18-e13fd3c092f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d19f4df,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b,PodSandboxId:fa4acc3b1d07c6001a52b7f4a1d7ad1bc8c7a946cf485d31b6c704654563291e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672714755250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fclvg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2c4436-1941-4409-8a6b-5f377cb7212c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7ac21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f,PodSandboxId:44f96aef11a5613094bc33ac16065e7b27f7e9ee577dd9753ccc083f4b918f18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672599716328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9tt8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42
140aad-7ab4-4f46-9f24-0fc8717220f4,},Annotations:map[string]string{io.kubernetes.container.hash: aa57921c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef,PodSandboxId:dcd0d87c5e1eccc31556bd38d9a68dfad992b8fa94ad8a2c65eda2e4ca824222,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810671697699613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47g8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b0f8e68-3a4a-4863-85e7-a5bba444bc39,},Annotations:map[string]string{io.kubernetes.container.hash: cedf1680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,PodSandboxId:3a402124ae25d858d6345d163c57e1093b6e845c9d00edcbe25356650f5b7ad0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810651665369904,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d40b5af9fb726dea1f435393c4f523,},Annotations:map[string]string{io.kubernetes.container.hash: 40c68c9e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,PodSandboxId:cbd798c4ad9e8d6f4dc7a0ad023c21512288aa2ecbdb534bbd5393857601528e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810651626695487,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77beb6980eb3fa091e5fddc4154c0c31,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,PodSandboxId:c0685cb27fc984b52e4394fcf8aecd91754cfd7ed90fbf0cec348ea765f5d646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810651567438831,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe8506f7beabb3b76305583423c6ad0,},Annotations:map[string]string{io.kubernetes.container.hash: 15ca256d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,PodSandboxId:cd282a65c6c517b7d02da5cf8d60979d5c90714b56f55e27605088be84ce376a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810651528983943,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e5f7356814fb10b848064696e83862,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cad9ac24-c50f-4b8d-b1af-884b127757c1 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.964489408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66b89c27-6b8a-4bae-847b-6ffc855e0fc5 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.964570233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66b89c27-6b8a-4bae-847b-6ffc855e0fc5 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.965890899Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a11d389e-4572-4c09-bd02-0a93b72f8e2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.966496494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811526966440850,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a11d389e-4572-4c09-bd02-0a93b72f8e2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.967386258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a72c5711-62d1-4416-9d41-f6ac86f177e9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.967440228Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a72c5711-62d1-4416-9d41-f6ac86f177e9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:26 no-preload-407991 crio[723]: time="2024-04-22 18:45:26.967620705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8,PodSandboxId:91405b7dfb5119be8e9ac5a920602aea5af70d0709f7704ff8d5a02dc133eca2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810673666135964,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c704413-c118-4a17-9a18-e13fd3c092f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d19f4df,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b,PodSandboxId:fa4acc3b1d07c6001a52b7f4a1d7ad1bc8c7a946cf485d31b6c704654563291e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672714755250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fclvg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2c4436-1941-4409-8a6b-5f377cb7212c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7ac21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f,PodSandboxId:44f96aef11a5613094bc33ac16065e7b27f7e9ee577dd9753ccc083f4b918f18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672599716328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9tt8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42
140aad-7ab4-4f46-9f24-0fc8717220f4,},Annotations:map[string]string{io.kubernetes.container.hash: aa57921c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef,PodSandboxId:dcd0d87c5e1eccc31556bd38d9a68dfad992b8fa94ad8a2c65eda2e4ca824222,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810671697699613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47g8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b0f8e68-3a4a-4863-85e7-a5bba444bc39,},Annotations:map[string]string{io.kubernetes.container.hash: cedf1680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,PodSandboxId:3a402124ae25d858d6345d163c57e1093b6e845c9d00edcbe25356650f5b7ad0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810651665369904,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d40b5af9fb726dea1f435393c4f523,},Annotations:map[string]string{io.kubernetes.container.hash: 40c68c9e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,PodSandboxId:cbd798c4ad9e8d6f4dc7a0ad023c21512288aa2ecbdb534bbd5393857601528e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810651626695487,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77beb6980eb3fa091e5fddc4154c0c31,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,PodSandboxId:c0685cb27fc984b52e4394fcf8aecd91754cfd7ed90fbf0cec348ea765f5d646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810651567438831,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe8506f7beabb3b76305583423c6ad0,},Annotations:map[string]string{io.kubernetes.container.hash: 15ca256d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,PodSandboxId:cd282a65c6c517b7d02da5cf8d60979d5c90714b56f55e27605088be84ce376a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810651528983943,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e5f7356814fb10b848064696e83862,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a72c5711-62d1-4416-9d41-f6ac86f177e9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.018519235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aed92995-31ee-4543-8c6e-0bfb9aa26cf1 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.018600171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aed92995-31ee-4543-8c6e-0bfb9aa26cf1 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.021235184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a5bb669-94d5-4e31-9982-f368c668cdbc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.021617530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811527021593196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a5bb669-94d5-4e31-9982-f368c668cdbc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.022680195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea550c7c-d154-42b8-bcc3-55e167a14c68 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.022753635Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea550c7c-d154-42b8-bcc3-55e167a14c68 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.022947536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8,PodSandboxId:91405b7dfb5119be8e9ac5a920602aea5af70d0709f7704ff8d5a02dc133eca2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810673666135964,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c704413-c118-4a17-9a18-e13fd3c092f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d19f4df,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b,PodSandboxId:fa4acc3b1d07c6001a52b7f4a1d7ad1bc8c7a946cf485d31b6c704654563291e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672714755250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fclvg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2c4436-1941-4409-8a6b-5f377cb7212c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7ac21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f,PodSandboxId:44f96aef11a5613094bc33ac16065e7b27f7e9ee577dd9753ccc083f4b918f18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672599716328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9tt8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42
140aad-7ab4-4f46-9f24-0fc8717220f4,},Annotations:map[string]string{io.kubernetes.container.hash: aa57921c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef,PodSandboxId:dcd0d87c5e1eccc31556bd38d9a68dfad992b8fa94ad8a2c65eda2e4ca824222,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810671697699613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47g8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b0f8e68-3a4a-4863-85e7-a5bba444bc39,},Annotations:map[string]string{io.kubernetes.container.hash: cedf1680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,PodSandboxId:3a402124ae25d858d6345d163c57e1093b6e845c9d00edcbe25356650f5b7ad0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810651665369904,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d40b5af9fb726dea1f435393c4f523,},Annotations:map[string]string{io.kubernetes.container.hash: 40c68c9e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,PodSandboxId:cbd798c4ad9e8d6f4dc7a0ad023c21512288aa2ecbdb534bbd5393857601528e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810651626695487,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77beb6980eb3fa091e5fddc4154c0c31,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,PodSandboxId:c0685cb27fc984b52e4394fcf8aecd91754cfd7ed90fbf0cec348ea765f5d646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810651567438831,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe8506f7beabb3b76305583423c6ad0,},Annotations:map[string]string{io.kubernetes.container.hash: 15ca256d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,PodSandboxId:cd282a65c6c517b7d02da5cf8d60979d5c90714b56f55e27605088be84ce376a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810651528983943,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e5f7356814fb10b848064696e83862,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea550c7c-d154-42b8-bcc3-55e167a14c68 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.064998767Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d40d4e3a-e677-481c-82d8-cf5d1995f612 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.065145305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d40d4e3a-e677-481c-82d8-cf5d1995f612 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.066430834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc07378f-5c59-4814-a74a-2b5635c1b1cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.066771126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811527066749899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99941,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc07378f-5c59-4814-a74a-2b5635c1b1cd name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.067372762Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e368ebc-358f-4a13-96f0-9ee3fc3d9f9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.067444630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e368ebc-358f-4a13-96f0-9ee3fc3d9f9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:27 no-preload-407991 crio[723]: time="2024-04-22 18:45:27.067645511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8,PodSandboxId:91405b7dfb5119be8e9ac5a920602aea5af70d0709f7704ff8d5a02dc133eca2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713810673666135964,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c704413-c118-4a17-9a18-e13fd3c092f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d19f4df,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b,PodSandboxId:fa4acc3b1d07c6001a52b7f4a1d7ad1bc8c7a946cf485d31b6c704654563291e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672714755250,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fclvg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e2c4436-1941-4409-8a6b-5f377cb7212c,},Annotations:map[string]string{io.kubernetes.container.hash: 4c7ac21e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f,PodSandboxId:44f96aef11a5613094bc33ac16065e7b27f7e9ee577dd9753ccc083f4b918f18,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713810672599716328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9tt8m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42
140aad-7ab4-4f46-9f24-0fc8717220f4,},Annotations:map[string]string{io.kubernetes.container.hash: aa57921c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef,PodSandboxId:dcd0d87c5e1eccc31556bd38d9a68dfad992b8fa94ad8a2c65eda2e4ca824222,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b,State:CONTAINER_RUNNING,CreatedAt:
1713810671697699613,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-47g8k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b0f8e68-3a4a-4863-85e7-a5bba444bc39,},Annotations:map[string]string{io.kubernetes.container.hash: cedf1680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea,PodSandboxId:3a402124ae25d858d6345d163c57e1093b6e845c9d00edcbe25356650f5b7ad0,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713810651665369904,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 59d40b5af9fb726dea1f435393c4f523,},Annotations:map[string]string{io.kubernetes.container.hash: 40c68c9e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc,PodSandboxId:cbd798c4ad9e8d6f4dc7a0ad023c21512288aa2ecbdb534bbd5393857601528e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced,State:CONTAINER_RUNNING,CreatedAt:1713810651626695487,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77beb6980eb3fa091e5fddc4154c0c31,},Annotations:map[string]string{io.kubernetes.container.hash: de199113,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07,PodSandboxId:c0685cb27fc984b52e4394fcf8aecd91754cfd7ed90fbf0cec348ea765f5d646,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0,State:CONTAINER_RUNNING,CreatedAt:1713810651567438831,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe8506f7beabb3b76305583423c6ad0,},Annotations:map[string]string{io.kubernetes.container.hash: 15ca256d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0,PodSandboxId:cd282a65c6c517b7d02da5cf8d60979d5c90714b56f55e27605088be84ce376a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b,State:CONTAINER_RUNNING,CreatedAt:1713810651528983943,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-407991,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7e5f7356814fb10b848064696e83862,},Annotations:map[string]string{io.kubernetes.container.hash: 933c3351,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e368ebc-358f-4a13-96f0-9ee3fc3d9f9d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cdad283db1c7f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   91405b7dfb511       storage-provisioner
	4b7f4e1e06ee2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   fa4acc3b1d07c       coredns-7db6d8ff4d-fclvg
	cab159e124934       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   44f96aef11a56       coredns-7db6d8ff4d-9tt8m
	e92f03b86edaa       a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b   14 minutes ago      Running             kube-proxy                0                   dcd0d87c5e1ec       kube-proxy-47g8k
	22caba79f3789       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   3a402124ae25d       etcd-no-preload-407991
	9ce2e44a81d88       259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced   14 minutes ago      Running             kube-scheduler            2                   cbd798c4ad9e8       kube-scheduler-no-preload-407991
	b532db71bb33f       c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0   14 minutes ago      Running             kube-apiserver            2                   c0685cb27fc98       kube-apiserver-no-preload-407991
	4e576823a82a0       c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b   14 minutes ago      Running             kube-controller-manager   2                   cd282a65c6c51       kube-controller-manager-no-preload-407991
	
	
	==> coredns [4b7f4e1e06ee219f7ca60a2fba93685cd7f77ca7f5688ec499ce4a2a94ac290b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [cab159e1249348a3452959c9c13ce116b3b69933f9b732214dc88eb22f8d259f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-407991
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-407991
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=no-preload-407991
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 18:30:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-407991
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:45:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:41:31 +0000   Mon, 22 Apr 2024 18:30:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:41:31 +0000   Mon, 22 Apr 2024 18:30:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:41:31 +0000   Mon, 22 Apr 2024 18:30:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:41:31 +0000   Mon, 22 Apr 2024 18:30:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.164
	  Hostname:    no-preload-407991
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d4f172ff26040a2976ef0fc34ce9b7b
	  System UUID:                7d4f172f-f260-40a2-976e-f0fc34ce9b7b
	  Boot ID:                    63c97cfd-5021-47a5-a4b5-dd9d389e4109
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9tt8m                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-fclvg                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-407991                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-407991             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-407991    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-47g8k                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-407991             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-569cc877fc-vrzfj              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-407991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-407991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-407991 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-407991 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-407991 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-407991 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-407991 event: Registered Node no-preload-407991 in Controller
	
	
	==> dmesg <==
	[  +0.059276] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040838] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.997989] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.840596] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.692614] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.820740] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.059704] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063156] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.201647] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.114840] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.313043] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[ +17.117183] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.068467] kauditd_printk_skb: 130 callbacks suppressed
	[Apr22 18:26] systemd-fstab-generator[1363]: Ignoring "noauto" option for root device
	[  +4.590176] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.407316] kauditd_printk_skb: 79 callbacks suppressed
	[Apr22 18:30] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.680055] systemd-fstab-generator[4013]: Ignoring "noauto" option for root device
	[  +4.560553] kauditd_printk_skb: 57 callbacks suppressed
	[  +2.009627] systemd-fstab-generator[4335]: Ignoring "noauto" option for root device
	[Apr22 18:31] systemd-fstab-generator[4545]: Ignoring "noauto" option for root device
	[  +0.122947] kauditd_printk_skb: 14 callbacks suppressed
	[Apr22 18:32] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [22caba79f378925d742d94c273f74d2c246f23e20113736ffbfb7c2a2612edea] <==
	{"level":"info","ts":"2024-04-22T18:30:52.063223Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6a00153c0a3e6122","initial-advertise-peer-urls":["https://192.168.39.164:2380"],"listen-peer-urls":["https://192.168.39.164:2380"],"advertise-client-urls":["https://192.168.39.164:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.164:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T18:30:52.065121Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T18:30:52.065106Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-04-22T18:30:52.065374Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.164:2380"}
	{"level":"info","ts":"2024-04-22T18:30:52.784134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-22T18:30:52.784202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-22T18:30:52.784238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 received MsgPreVoteResp from 6a00153c0a3e6122 at term 1"}
	{"level":"info","ts":"2024-04-22T18:30:52.78425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 became candidate at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.784255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 received MsgVoteResp from 6a00153c0a3e6122 at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.784263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6a00153c0a3e6122 became leader at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.784274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6a00153c0a3e6122 elected leader 6a00153c0a3e6122 at term 2"}
	{"level":"info","ts":"2024-04-22T18:30:52.789232Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.793336Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6a00153c0a3e6122","local-member-attributes":"{Name:no-preload-407991 ClientURLs:[https://192.168.39.164:2379]}","request-path":"/0/members/6a00153c0a3e6122/attributes","cluster-id":"ae46f2aa0c35daf3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T18:30:52.79518Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae46f2aa0c35daf3","local-member-id":"6a00153c0a3e6122","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.795281Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:30:52.795306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.795377Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T18:30:52.795412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T18:30:52.797304Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T18:30:52.797366Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-22T18:30:52.797441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T18:30:52.798836Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.164:2379"}
	{"level":"info","ts":"2024-04-22T18:40:52.896752Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":710}
	{"level":"info","ts":"2024-04-22T18:40:52.907759Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":710,"took":"10.143723ms","hash":3730175298,"current-db-size-bytes":2150400,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2150400,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-04-22T18:40:52.90786Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3730175298,"revision":710,"compact-revision":-1}
	
	
	==> kernel <==
	 18:45:27 up 20 min,  0 users,  load average: 0.11, 0.19, 0.20
	Linux no-preload-407991 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b532db71bb33f8667a98e4948cd0518db22848d98b54facef1b0e1ef3e25eb07] <==
	I0422 18:38:55.438229       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:40:54.439724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:40:54.439888       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0422 18:40:55.440228       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:40:55.440416       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	W0422 18:40:55.440275       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:40:55.440622       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:40:55.440488       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0422 18:40:55.442607       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:41:55.441763       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:41:55.441850       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:41:55.441864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:41:55.442904       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:41:55.442983       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:41:55.443012       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:43:55.442420       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:43:55.442568       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0422 18:43:55.442584       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0422 18:43:55.443892       1 handler_proxy.go:93] no RequestInfo found in the context
	E0422 18:43:55.444027       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:43:55.444126       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4e576823a82a08010171f6e05209df250bd41bdf84d8bd141480368d3039ceb0] <==
	I0422 18:39:41.894720       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:40:11.409260       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:40:11.905358       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:40:41.414755       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:40:41.915322       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:41:11.421420       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:41:11.922956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:41:41.427208       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:41:41.932457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:42:11.433568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:42:11.941238       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0422 18:42:14.335403       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="196.463µs"
	I0422 18:42:27.330523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="47.968µs"
	E0422 18:42:41.439814       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:42:41.950301       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:43:11.445811       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:43:11.958952       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:43:41.451648       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:43:41.971246       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:44:11.457726       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:44:11.981286       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:44:41.463568       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:44:41.989817       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0422 18:45:11.472107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0422 18:45:12.000005       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
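
The apiserver 503s and the controller-manager "stale GroupVersion discovery" errors above share one cause: the aggregated v1beta1.metrics.k8s.io APIService is registered but its backing metrics-server pod never becomes ready in this run (see the kubelet ImagePullBackOff entries further down). A quick, hedged way to confirm that against this profile, assuming the addon carries its usual k8s-app=metrics-server label, is:

    # the APIService should report Available=False while its backend is down
    kubectl --context no-preload-407991 get apiservice v1beta1.metrics.k8s.io
    # the backing pod, if the usual addon label applies
    kubectl --context no-preload-407991 -n kube-system get pods -l k8s-app=metrics-server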
	
	
	==> kube-proxy [e92f03b86edaa926c1866ba49edfe012e30b1dffbdacddd4e30236b8e933b9ef] <==
	I0422 18:31:12.017636       1 server_linux.go:69] "Using iptables proxy"
	I0422 18:31:12.031254       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.164"]
	I0422 18:31:12.170768       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0422 18:31:12.170817       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0422 18:31:12.170833       1 server_linux.go:165] "Using iptables Proxier"
	I0422 18:31:12.173816       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 18:31:12.174008       1 server.go:872] "Version info" version="v1.30.0"
	I0422 18:31:12.174026       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 18:31:12.179961       1 config.go:192] "Starting service config controller"
	I0422 18:31:12.180118       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 18:31:12.180215       1 config.go:101] "Starting endpoint slice config controller"
	I0422 18:31:12.180245       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 18:31:12.191622       1 config.go:319] "Starting node config controller"
	I0422 18:31:12.191834       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 18:31:12.281256       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0422 18:31:12.281337       1 shared_informer.go:320] Caches are synced for service config
	I0422 18:31:12.291914       1 shared_informer.go:320] Caches are synced for node config
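
kube-proxy starts cleanly here in single-stack IPv4 iptables mode (the guest has no IPv6 nat table, as the kubelet canary errors below also show). If service proxying were in doubt, one way to confirm the mode took effect is to look for the chains the iptables proxier programs on the node:

    # the iptables proxier maintains the KUBE-SERVICES chain in the nat table
    minikube -p no-preload-407991 ssh "sudo iptables -t nat -L KUBE-SERVICES | head"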
	
	
	==> kube-scheduler [9ce2e44a81d8808869e857f85d448e3191c7e8a6bc37bdd3b64eb6e2f9f224bc] <==
	W0422 18:30:55.322933       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 18:30:55.323010       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 18:30:55.368856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:55.368936       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:55.395688       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 18:30:55.395752       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 18:30:55.416610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 18:30:55.416849       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0422 18:30:55.531149       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:30:55.531251       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0422 18:30:55.565253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0422 18:30:55.565344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0422 18:30:55.584850       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 18:30:55.584904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 18:30:55.673277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:55.673333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:55.692307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:30:55.692359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 18:30:55.772335       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 18:30:55.772428       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:30:55.777161       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:30:55.777219       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0422 18:30:55.815508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:30:55.815559       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0422 18:30:58.283825       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
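
The burst of "forbidden" list/watch errors above is the usual scheduler start-up race: its informers begin listing before the apiserver has the bootstrap RBAC for system:kube-scheduler available, and the closing "Caches are synced" line shows it recovered. Only if the errors persisted would it be worth checking that the default binding exists, e.g.:

    kubectl --context no-preload-407991 get clusterrolebinding system:kube-scheduler -o wide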
	
	
	==> kubelet <==
	Apr 22 18:42:57 no-preload-407991 kubelet[4342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:42:57 no-preload-407991 kubelet[4342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:42:57 no-preload-407991 kubelet[4342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:42:57 no-preload-407991 kubelet[4342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:43:11 no-preload-407991 kubelet[4342]: E0422 18:43:11.317800    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:43:22 no-preload-407991 kubelet[4342]: E0422 18:43:22.314745    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:43:37 no-preload-407991 kubelet[4342]: E0422 18:43:37.316728    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:43:52 no-preload-407991 kubelet[4342]: E0422 18:43:52.315713    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:43:57 no-preload-407991 kubelet[4342]: E0422 18:43:57.366829    4342 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:43:57 no-preload-407991 kubelet[4342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:43:57 no-preload-407991 kubelet[4342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:43:57 no-preload-407991 kubelet[4342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:43:57 no-preload-407991 kubelet[4342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:44:07 no-preload-407991 kubelet[4342]: E0422 18:44:07.315956    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:44:18 no-preload-407991 kubelet[4342]: E0422 18:44:18.314035    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:44:32 no-preload-407991 kubelet[4342]: E0422 18:44:32.314687    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:44:47 no-preload-407991 kubelet[4342]: E0422 18:44:47.316412    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:44:57 no-preload-407991 kubelet[4342]: E0422 18:44:57.365122    4342 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 22 18:44:57 no-preload-407991 kubelet[4342]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 22 18:44:57 no-preload-407991 kubelet[4342]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 22 18:44:57 no-preload-407991 kubelet[4342]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 22 18:44:57 no-preload-407991 kubelet[4342]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 22 18:45:00 no-preload-407991 kubelet[4342]: E0422 18:45:00.315527    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:45:11 no-preload-407991 kubelet[4342]: E0422 18:45:11.317173    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
	Apr 22 18:45:26 no-preload-407991 kubelet[4342]: E0422 18:45:26.315023    4342 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-vrzfj" podUID="b9751edd-f883-48a0-bc18-1dbc9eec191f"
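
Every metrics-server sync failure above is the same ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4, an image on a registry that does not resolve, so the pod can never start and the APIService errors earlier in this dump follow directly from that. While the pod still existed, its pull history would have been visible in its events; a hedged pair of commands, assuming the usual addon label:

    # pull and back-off history shows up in the pod's events
    kubectl --context no-preload-407991 -n kube-system describe pod -l k8s-app=metrics-server
    # or just the most recent events in the namespace
    kubectl --context no-preload-407991 -n kube-system get events --sort-by=.lastTimestamp | tail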
	
	
	==> storage-provisioner [cdad283db1c7f6885b70cd7adad7d95debcb02dbf4b2447cd00cc969179651d8] <==
	I0422 18:31:13.796431       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:31:13.808844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:31:13.809239       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:31:13.825321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:31:13.825615       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-407991_792cf973-3284-4091-b176-6db56f70a08f!
	I0422 18:31:13.825890       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1af96034-d1fb-4625-a9b9-c59fe9c2410c", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-407991_792cf973-3284-4091-b176-6db56f70a08f became leader
	I0422 18:31:13.929601       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-407991_792cf973-3284-4091-b176-6db56f70a08f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-407991 -n no-preload-407991
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-407991 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-vrzfj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-407991 describe pod metrics-server-569cc877fc-vrzfj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-407991 describe pod metrics-server-569cc877fc-vrzfj: exit status 1 (66.362967ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-vrzfj" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-407991 describe pod metrics-server-569cc877fc-vrzfj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (308.45s)
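
By the time the post-mortem runs, the metrics-server pod has already been removed, hence the NotFound from the describe above. Two hedged checks that would narrow this down while the cluster is still up are to ask minikube which addons it believes are enabled for the profile and to look at the dashboard pods this step waits on (the old-k8s-version variant below shows the same wait):

    minikube -p no-preload-407991 addons list
    kubectl --context no-preload-407991 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard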

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (155.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
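
The polls that follow all fail the same way: the apiserver at 192.168.72.149:8443 refuses connections, so the 9m0s wait for dashboard pods can only time out. The first thing to establish is whether the control plane ever came back after the restart; the profile name is not shown in this excerpt, so <profile> below is a placeholder for the old-k8s-version profile used in this run:

    minikube status -p <profile>
    kubectl --context <profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard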
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:42:45.496211   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:43:09.194222   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:43:20.339334   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:44:22.053623   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:44:37.165182   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:44:50.047464   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
E0422 18:44:57.217048   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.149:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.149:8443: connect: connection refused
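The warnings above are the test helper repeatedly listing pods in the kubernetes-dashboard namespace while the apiserver at 192.168.72.149:8443 is refusing connections. As a rough illustration of that check, here is a minimal client-go sketch that lists pods by the same label selector and retries on error; the kubeconfig path, retry count, and delay are illustrative assumptions, not the harness's actual values.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the harness uses the kubeconfig written for the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll for dashboard pods, mirroring the GET
	// /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard
	// seen in the warnings; while the apiserver is down each attempt fails with "connection refused".
	for attempt := 0; attempt < 10; attempt++ {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Println("WARNING: pod list failed:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
		return
	}
	fmt.Println("gave up waiting for dashboard pods")
}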
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (265.012707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-367072" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-367072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-367072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.538µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-367072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
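When the wait times out, the test falls back to "minikube status --format={{.APIServer}}" for the profile; exit status 2 only means a component is not Running, hence the "may be ok" note. The following is a minimal Go sketch of that fallback check, assuming the binary path and profile name shown in the log above; it is illustrative, not the harness's actual helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary path and profile name are taken from the log above; adjust for your environment.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "old-k8s-version-367072", "-n", "old-k8s-version-367072")
	out, err := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		// minikube exits non-zero (e.g. exit status 2) when a component is not Running;
		// the harness records the state ("Stopped" here) and skips further kubectl commands.
		fmt.Printf("apiserver state %q (exit error: %v)\n", state, err)
		return
	}
	fmt.Printf("apiserver state: %s\n", state)
}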
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (245.284696ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-367072 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-367072 logs -n 25: (1.594084054s)
E0422 18:45:07.902333   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-457191 sudo cat                              | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo                                  | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo find                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p calico-457191 sudo crio                             | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p calico-457191                                       | calico-457191                | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-944223 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:16 UTC |
	|         | disable-driver-mounts-944223                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:16 UTC | 22 Apr 24 18:17 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-407991             | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-782377            | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-856422  | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC | 22 Apr 24 18:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:17 UTC |                     |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-407991                  | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-782377                 | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-407991                                   | no-preload-407991            | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:31 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-367072        | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p embed-certs-782377                                  | embed-certs-782377           | jenkins | v1.33.0 | 22 Apr 24 18:19 UTC | 22 Apr 24 18:30 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-856422       | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-856422 | jenkins | v1.33.0 | 22 Apr 24 18:20 UTC | 22 Apr 24 18:30 UTC |
	|         | default-k8s-diff-port-856422                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-367072             | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC | 22 Apr 24 18:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-367072                              | old-k8s-version-367072       | jenkins | v1.33.0 | 22 Apr 24 18:21 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:21:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:21:44.651239   78377 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:21:44.651502   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651512   78377 out.go:304] Setting ErrFile to fd 2...
	I0422 18:21:44.651517   78377 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:21:44.651743   78377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:21:44.652361   78377 out.go:298] Setting JSON to false
	I0422 18:21:44.653361   78377 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7450,"bootTime":1713802655,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:21:44.653418   78377 start.go:139] virtualization: kvm guest
	I0422 18:21:44.655663   78377 out.go:177] * [old-k8s-version-367072] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:21:44.657140   78377 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:21:44.658441   78377 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:21:44.657169   78377 notify.go:220] Checking for updates...
	I0422 18:21:44.661128   78377 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:21:44.662518   78377 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:21:44.663775   78377 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:21:44.665418   78377 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:21:44.667565   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:21:44.667940   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.667974   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.682806   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36577
	I0422 18:21:44.683248   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.683772   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.683796   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.684162   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.684386   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.686458   78377 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:21:44.688047   78377 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:21:44.688430   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:21:44.688471   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:21:44.703069   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40021
	I0422 18:21:44.703543   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:21:44.704022   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:21:44.704045   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:21:44.704344   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:21:44.704551   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:21:44.740500   78377 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 18:21:44.741959   78377 start.go:297] selected driver: kvm2
	I0422 18:21:44.741977   78377 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.742115   78377 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:21:44.742852   78377 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.742936   78377 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 18:21:44.757771   78377 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 18:21:44.758147   78377 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:21:44.758223   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:21:44.758237   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:21:44.758283   78377 start.go:340] cluster config:
	{Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:21:44.758417   78377 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:21:44.760296   78377 out.go:177] * Starting "old-k8s-version-367072" primary control-plane node in "old-k8s-version-367072" cluster
	I0422 18:21:44.761538   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:21:44.761589   78377 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 18:21:44.761603   78377 cache.go:56] Caching tarball of preloaded images
	I0422 18:21:44.761682   78377 preload.go:173] Found /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0422 18:21:44.761696   78377 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0422 18:21:44.761815   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:21:44.762033   78377 start.go:360] acquireMachinesLock for old-k8s-version-367072: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:21:45.719482   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:48.791433   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:54.871446   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:21:57.943441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:04.023441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:07.095417   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:13.175430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:16.247522   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:22.327414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:25.399441   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:31.479440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:34.551439   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:40.631451   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:43.703447   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:49.783400   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:52.855484   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:22:58.935464   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:02.007435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:08.087442   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:11.159452   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:17.239435   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:20.311430   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:26.391420   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:29.463418   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:35.543443   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:38.615421   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:44.695419   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:47.767475   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:53.847471   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:23:56.919436   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:02.999404   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:06.071458   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:12.151440   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:15.223414   77400 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.164:22: connect: no route to host
	I0422 18:24:18.227587   77634 start.go:364] duration metric: took 4m29.759611802s to acquireMachinesLock for "embed-certs-782377"
	I0422 18:24:18.227650   77634 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:18.227661   77634 fix.go:54] fixHost starting: 
	I0422 18:24:18.227979   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:18.228013   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:18.243001   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33313
	I0422 18:24:18.243415   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:18.243835   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:24:18.243850   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:18.244219   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:18.244384   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:18.244534   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:24:18.246202   77634 fix.go:112] recreateIfNeeded on embed-certs-782377: state=Stopped err=<nil>
	I0422 18:24:18.246228   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	W0422 18:24:18.246399   77634 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:18.248257   77634 out.go:177] * Restarting existing kvm2 VM for "embed-certs-782377" ...
	I0422 18:24:18.249777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Start
	I0422 18:24:18.249966   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring networks are active...
	I0422 18:24:18.250666   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network default is active
	I0422 18:24:18.251036   77634 main.go:141] libmachine: (embed-certs-782377) Ensuring network mk-embed-certs-782377 is active
	I0422 18:24:18.251499   77634 main.go:141] libmachine: (embed-certs-782377) Getting domain xml...
	I0422 18:24:18.252150   77634 main.go:141] libmachine: (embed-certs-782377) Creating domain...
	I0422 18:24:18.225125   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:18.225168   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225565   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:24:18.225593   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:24:18.225781   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:24:18.227460   77400 machine.go:97] duration metric: took 4m37.410379606s to provisionDockerMachine
	I0422 18:24:18.227495   77400 fix.go:56] duration metric: took 4m37.433636251s for fixHost
	I0422 18:24:18.227499   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 4m37.433656207s
	W0422 18:24:18.227517   77400 start.go:713] error starting host: provision: host is not running
	W0422 18:24:18.227584   77400 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0422 18:24:18.227593   77400 start.go:728] Will try again in 5 seconds ...
	I0422 18:24:19.442937   77634 main.go:141] libmachine: (embed-certs-782377) Waiting to get IP...
	I0422 18:24:19.444048   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.444425   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.444484   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.444392   78906 retry.go:31] will retry after 283.008432ms: waiting for machine to come up
	I0422 18:24:19.729076   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.729457   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.729493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.729411   78906 retry.go:31] will retry after 252.047573ms: waiting for machine to come up
	I0422 18:24:19.983011   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:19.983417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:19.983442   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:19.983397   78906 retry.go:31] will retry after 300.528755ms: waiting for machine to come up
	I0422 18:24:20.286039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.286467   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.286500   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.286425   78906 retry.go:31] will retry after 426.555496ms: waiting for machine to come up
	I0422 18:24:20.715191   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:20.715601   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:20.715638   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:20.715525   78906 retry.go:31] will retry after 533.433633ms: waiting for machine to come up
	I0422 18:24:21.250151   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:21.250702   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:21.250732   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:21.250646   78906 retry.go:31] will retry after 854.033547ms: waiting for machine to come up
	I0422 18:24:22.106728   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.107083   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.107109   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.107036   78906 retry.go:31] will retry after 761.233698ms: waiting for machine to come up
	I0422 18:24:22.870007   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:22.870408   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:22.870435   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:22.870364   78906 retry.go:31] will retry after 1.121568589s: waiting for machine to come up
	I0422 18:24:23.229316   77400 start.go:360] acquireMachinesLock for no-preload-407991: {Name:mk64c43b652bcca7a12d3e78dcc142e8b5982f60 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0422 18:24:23.993127   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:23.993600   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:23.993623   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:23.993535   78906 retry.go:31] will retry after 1.525222377s: waiting for machine to come up
	I0422 18:24:25.520203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:25.520584   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:25.520609   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:25.520557   78906 retry.go:31] will retry after 1.618927059s: waiting for machine to come up
	I0422 18:24:27.140862   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:27.141363   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:27.141391   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:27.141315   78906 retry.go:31] will retry after 1.828869827s: waiting for machine to come up
	I0422 18:24:28.972053   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:28.972472   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:28.972508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:28.972438   78906 retry.go:31] will retry after 2.456935091s: waiting for machine to come up
	I0422 18:24:31.430825   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:31.431208   77634 main.go:141] libmachine: (embed-certs-782377) DBG | unable to find current IP address of domain embed-certs-782377 in network mk-embed-certs-782377
	I0422 18:24:31.431266   77634 main.go:141] libmachine: (embed-certs-782377) DBG | I0422 18:24:31.431181   78906 retry.go:31] will retry after 3.415431602s: waiting for machine to come up
	I0422 18:24:36.144008   77929 start.go:364] duration metric: took 4m11.537292071s to acquireMachinesLock for "default-k8s-diff-port-856422"
	I0422 18:24:36.144073   77929 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:36.144079   77929 fix.go:54] fixHost starting: 
	I0422 18:24:36.144413   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:36.144450   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:36.161253   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43617
	I0422 18:24:36.161715   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:36.162147   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:24:36.162166   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:36.162536   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:36.162743   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:36.162914   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:24:36.164366   77929 fix.go:112] recreateIfNeeded on default-k8s-diff-port-856422: state=Stopped err=<nil>
	I0422 18:24:36.164397   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	W0422 18:24:36.164563   77929 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:36.166915   77929 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-856422" ...
	I0422 18:24:34.847819   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848316   77634 main.go:141] libmachine: (embed-certs-782377) Found IP for machine: 192.168.50.114
	I0422 18:24:34.848339   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has current primary IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.848357   77634 main.go:141] libmachine: (embed-certs-782377) Reserving static IP address...
	I0422 18:24:34.848741   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.848769   77634 main.go:141] libmachine: (embed-certs-782377) DBG | skip adding static IP to network mk-embed-certs-782377 - found existing host DHCP lease matching {name: "embed-certs-782377", mac: "52:54:00:ab:0f:f2", ip: "192.168.50.114"}
	I0422 18:24:34.848782   77634 main.go:141] libmachine: (embed-certs-782377) Reserved static IP address: 192.168.50.114
	I0422 18:24:34.848801   77634 main.go:141] libmachine: (embed-certs-782377) Waiting for SSH to be available...
	I0422 18:24:34.848808   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Getting to WaitForSSH function...
	I0422 18:24:34.850829   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851167   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.851199   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.851332   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH client type: external
	I0422 18:24:34.851352   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa (-rw-------)
	I0422 18:24:34.851383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:34.851402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | About to run SSH command:
	I0422 18:24:34.851417   77634 main.go:141] libmachine: (embed-certs-782377) DBG | exit 0
	I0422 18:24:34.975383   77634 main.go:141] libmachine: (embed-certs-782377) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:34.975812   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetConfigRaw
	I0422 18:24:34.976602   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:34.979578   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.979959   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.979992   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.980238   77634 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/config.json ...
	I0422 18:24:34.980472   77634 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:34.980497   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:34.980777   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:34.983493   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.983958   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:34.983999   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:34.984175   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:34.984372   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:34.984710   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:34.984894   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:34.985074   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:34.985086   77634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:24:35.099838   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:35.099873   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100144   77634 buildroot.go:166] provisioning hostname "embed-certs-782377"
	I0422 18:24:35.100169   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.100381   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.103203   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103589   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.103618   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.103754   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.103930   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104116   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.104262   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.104446   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.104696   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.104720   77634 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-782377 && echo "embed-certs-782377" | sudo tee /etc/hostname
	I0422 18:24:35.223934   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-782377
	
	I0422 18:24:35.223962   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.227033   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227376   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.227413   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.227598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.227779   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.227976   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.228140   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.228334   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.228492   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.228508   77634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-782377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-782377/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-782377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:35.346513   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:35.346545   77634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:35.346561   77634 buildroot.go:174] setting up certificates
	I0422 18:24:35.346571   77634 provision.go:84] configureAuth start
	I0422 18:24:35.346598   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetMachineName
	I0422 18:24:35.346898   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:35.349820   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350164   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.350192   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.350301   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.352921   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353288   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.353314   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.353488   77634 provision.go:143] copyHostCerts
	I0422 18:24:35.353543   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:35.353552   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:35.353619   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:35.353717   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:35.353725   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:35.353749   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:35.353801   77634 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:35.353810   77634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:35.353831   77634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:35.353894   77634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.embed-certs-782377 san=[127.0.0.1 192.168.50.114 embed-certs-782377 localhost minikube]
	I0422 18:24:35.463676   77634 provision.go:177] copyRemoteCerts
	I0422 18:24:35.463733   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:35.463758   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.466567   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.467039   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.467233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.467415   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.467605   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.467740   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.549947   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:35.576364   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:24:35.601539   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:35.625959   77634 provision.go:87] duration metric: took 279.37435ms to configureAuth
	I0422 18:24:35.625992   77634 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:35.626171   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:35.626235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.629095   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629508   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.629533   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.629707   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.629934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630077   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.630238   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.630365   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:35.630546   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:35.630563   77634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:35.906862   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:35.906892   77634 machine.go:97] duration metric: took 926.403466ms to provisionDockerMachine
	I0422 18:24:35.906905   77634 start.go:293] postStartSetup for "embed-certs-782377" (driver="kvm2")
	I0422 18:24:35.906916   77634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:35.906934   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:35.907241   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:35.907277   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:35.910029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910402   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:35.910438   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:35.910599   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:35.910782   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:35.910993   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:35.911168   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:35.994189   77634 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:35.998376   77634 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:35.998395   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:35.998468   77634 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:35.998545   77634 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:35.998650   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:36.008268   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:36.034031   77634 start.go:296] duration metric: took 127.110389ms for postStartSetup
	I0422 18:24:36.034081   77634 fix.go:56] duration metric: took 17.806421597s for fixHost
	I0422 18:24:36.034100   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.036964   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037357   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.037380   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.037552   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.037775   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038051   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.038233   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.038403   77634 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:36.038568   77634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.114 22 <nil> <nil>}
	I0422 18:24:36.038579   77634 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:24:36.143878   77634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810276.108619822
	
	I0422 18:24:36.143903   77634 fix.go:216] guest clock: 1713810276.108619822
	I0422 18:24:36.143911   77634 fix.go:229] Guest: 2024-04-22 18:24:36.108619822 +0000 UTC Remote: 2024-04-22 18:24:36.034084746 +0000 UTC m=+287.715620683 (delta=74.535076ms)
	I0422 18:24:36.143936   77634 fix.go:200] guest clock delta is within tolerance: 74.535076ms
	I0422 18:24:36.143941   77634 start.go:83] releasing machines lock for "embed-certs-782377", held for 17.916313877s
	I0422 18:24:36.143966   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.144235   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:36.146867   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147228   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.147257   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.147431   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.147883   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148066   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:24:36.148171   77634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:36.148218   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.148377   77634 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:36.148403   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:24:36.150838   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151150   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151176   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151268   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151296   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.151466   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.151628   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.151671   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:36.151695   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:36.151747   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.151880   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:24:36.152055   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:24:36.152209   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:24:36.152350   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:24:36.229109   77634 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:36.266621   77634 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:36.421344   77634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:36.427814   77634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:36.427892   77634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:36.448157   77634 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:36.448192   77634 start.go:494] detecting cgroup driver to use...
	I0422 18:24:36.448255   77634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:36.468930   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:36.485780   77634 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:36.485856   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:36.502182   77634 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:36.521179   77634 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:36.636244   77634 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:36.783292   77634 docker.go:233] disabling docker service ...
	I0422 18:24:36.783366   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:36.803014   77634 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:36.817938   77634 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:36.957954   77634 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:37.085750   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:37.101054   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:37.123504   77634 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:37.123555   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.134422   77634 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:37.134491   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.145961   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.157192   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.170117   77634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:37.188656   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.205792   77634 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.225739   77634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:37.236719   77634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:37.246351   77634 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:37.246401   77634 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:37.261144   77634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:37.271464   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:37.395686   77634 ssh_runner.go:195] Run: sudo systemctl restart crio
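	The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls before CRI-O is restarted. A minimal sketch of an equivalent drop-in written from scratch (the file name 99-sketch.conf and the exact TOML layout are assumptions, not the file minikube produces):

	    # hypothetical drop-in reproducing the settings applied by the sed commands above
	    sudo tee /etc/crio/crio.conf.d/99-sketch.conf >/dev/null <<'EOF'
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    EOF
	    sudo systemctl restart crio
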
	I0422 18:24:37.534079   77634 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:37.534156   77634 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:37.539212   77634 start.go:562] Will wait 60s for crictl version
	I0422 18:24:37.539285   77634 ssh_runner.go:195] Run: which crictl
	I0422 18:24:37.543239   77634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:37.581460   77634 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:37.581562   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.611743   77634 ssh_runner.go:195] Run: crio --version
	I0422 18:24:37.645811   77634 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:37.647247   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetIP
	I0422 18:24:37.650321   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.650811   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:24:37.650841   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:24:37.651055   77634 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:37.655865   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:37.673617   77634 kubeadm.go:877] updating cluster {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:37.673732   77634 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:37.673785   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:37.718534   77634 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:37.718609   77634 ssh_runner.go:195] Run: which lz4
	I0422 18:24:37.723369   77634 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0422 18:24:37.728270   77634 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:37.728303   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:36.168344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Start
	I0422 18:24:36.168494   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring networks are active...
	I0422 18:24:36.169419   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network default is active
	I0422 18:24:36.169811   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Ensuring network mk-default-k8s-diff-port-856422 is active
	I0422 18:24:36.170341   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Getting domain xml...
	I0422 18:24:36.171019   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Creating domain...
	I0422 18:24:37.407148   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting to get IP...
	I0422 18:24:37.408083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408430   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.408509   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.408416   79040 retry.go:31] will retry after 267.855158ms: waiting for machine to come up
	I0422 18:24:37.677765   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678134   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.678168   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.678084   79040 retry.go:31] will retry after 267.61504ms: waiting for machine to come up
	I0422 18:24:37.947737   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948250   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:37.948276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:37.948216   79040 retry.go:31] will retry after 351.088664ms: waiting for machine to come up
	I0422 18:24:38.300548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301057   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.301090   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.301011   79040 retry.go:31] will retry after 560.164848ms: waiting for machine to come up
	I0422 18:24:38.862557   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863114   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:38.863157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:38.863075   79040 retry.go:31] will retry after 590.286684ms: waiting for machine to come up
	I0422 18:24:39.454925   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455483   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:39.455510   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:39.455428   79040 retry.go:31] will retry after 870.474888ms: waiting for machine to come up
	I0422 18:24:39.338447   77634 crio.go:462] duration metric: took 1.615205556s to copy over tarball
	I0422 18:24:39.338545   77634 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:41.640474   77634 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.301883484s)
	I0422 18:24:41.640514   77634 crio.go:469] duration metric: took 2.302038123s to extract the tarball
	I0422 18:24:41.640524   77634 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:24:41.680325   77634 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:41.724755   77634 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:24:41.724777   77634 cache_images.go:84] Images are preloaded, skipping loading
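	Because no preloaded images were found, the runner copied the ~394 MB preload tarball (394544937 bytes) to the guest and unpacked it into /var before re-checking crictl, after which all images report as preloaded. A hand-run equivalent is sketched below, assuming the same key, user and paths seen in the log, that lz4 is available in the guest (as the tar -I lz4 call above implies), and copying via /tmp instead of / so the non-root scp succeeds:

	    # sketch only: paths and addresses taken from the log above
	    KEY=/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa
	    TARBALL=/home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	    scp -i "$KEY" "$TARBALL" docker@192.168.50.114:/tmp/preloaded.tar.lz4
	    ssh -i "$KEY" docker@192.168.50.114 \
	      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm /tmp/preloaded.tar.lz4 && sudo crictl images'
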
	I0422 18:24:41.724785   77634 kubeadm.go:928] updating node { 192.168.50.114 8443 v1.30.0 crio true true} ...
	I0422 18:24:41.724887   77634 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-782377 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:24:41.724964   77634 ssh_runner.go:195] Run: crio config
	I0422 18:24:41.772680   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:41.772704   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:41.772715   77634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:24:41.772733   77634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.114 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-782377 NodeName:embed-certs-782377 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:24:41.772898   77634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-782377"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:24:41.772964   77634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:24:41.783492   77634 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:24:41.783575   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:24:41.793500   77634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0422 18:24:41.810415   77634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:24:41.827504   77634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
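
The kubeadm.yaml.new written above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, as dumped at kubeadm.go:187). As an illustration only, not minikube code, the following Go sketch splits such a stream and prints each document's kind; the gopkg.in/yaml.v3 dependency and the hard-coded path are assumptions of the sketch.

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency; any multi-document YAML decoder works
    )

    func main() {
        // Path taken from the log line above; adjust for a local experiment.
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // end of the YAML stream
                }
                panic(err)
            }
            // Each document declares its own kind: InitConfiguration,
            // ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }
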
	I0422 18:24:41.845704   77634 ssh_runner.go:195] Run: grep 192.168.50.114	control-plane.minikube.internal$ /etc/hosts
	I0422 18:24:41.849728   77634 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.114	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
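
The /etc/hosts rewrite above is a filter-then-append: drop any existing control-plane.minikube.internal line, then add the current IP. A minimal Go sketch of the same idempotent update, assuming a scratch file path so it can be tried without touching the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites a hosts-format file so it maps host to ip exactly once,
    // mirroring the grep -v / echo / cp sequence in the log line above.
    func pinHost(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop any existing mapping, like the grep -v filter
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // IP and hostname are taken from the log; the scratch path is hypothetical.
        if err := pinHost("/tmp/hosts.sketch", "192.168.50.114", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
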
	I0422 18:24:41.862798   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:41.998260   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:24:42.018779   77634 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377 for IP: 192.168.50.114
	I0422 18:24:42.018801   77634 certs.go:194] generating shared ca certs ...
	I0422 18:24:42.018820   77634 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:24:42.018977   77634 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:24:42.019034   77634 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:24:42.019048   77634 certs.go:256] generating profile certs ...
	I0422 18:24:42.019146   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/client.key
	I0422 18:24:42.019218   77634 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key.d804c20e
	I0422 18:24:42.019298   77634 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key
	I0422 18:24:42.019455   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:24:42.019493   77634 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:24:42.019509   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:24:42.019539   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:24:42.019571   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:24:42.019606   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:24:42.019665   77634 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:42.020460   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:24:42.065297   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:24:42.098581   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:24:42.139751   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:24:42.169770   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0422 18:24:42.199958   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:24:42.229298   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:24:42.254517   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/embed-certs-782377/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:24:42.279390   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:24:42.303872   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:24:42.329704   77634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:24:42.355108   77634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:24:42.372684   77634 ssh_runner.go:195] Run: openssl version
	I0422 18:24:42.378631   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:24:42.389709   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394492   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.394552   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:24:42.400346   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:24:42.411335   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:24:42.422568   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427213   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.427278   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:24:42.433277   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:24:42.444618   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:24:42.455793   77634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460681   77634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.460739   77634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:24:42.466785   77634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
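
Each CA above is installed by copying the PEM into /usr/share/ca-certificates and then symlinking <openssl-subject-hash>.0 under /etc/ssl/certs to it. A rough Go sketch of that last step, shelling out to the same openssl invocation the log uses (assumes openssl on PATH and write access to the certificate directory):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash of a PEM certificate and points
    // <hash>.0 in certDir at it, the same shape as the ln -fs steps above.
    func linkCert(pemPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certDir, hash+".0")
        _ = os.Remove(link) // -f semantics: replace an existing link if present
        return os.Symlink(pemPath, link)
    }

    func main() {
        // Paths taken from the log; writing into /etc/ssl/certs requires root.
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
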
	I0422 18:24:42.485401   77634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:24:42.491205   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:24:42.498635   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:24:42.510577   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:24:42.517596   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:24:42.524413   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:24:42.530872   77634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
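
The openssl x509 -checkend 86400 calls above ask whether each certificate expires within the next 24 hours. The same check can be expressed with Go's standard crypto/x509; a small sketch, with the path taken from the first check above:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file expires
    // inside the given window, the question `openssl x509 -checkend 86400` answers.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
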
	I0422 18:24:42.537199   77634 kubeadm.go:391] StartCluster: {Name:embed-certs-782377 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:embed-certs-782377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:24:42.537319   77634 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:24:42.537379   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.579863   77634 cri.go:89] found id: ""
	I0422 18:24:42.579944   77634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:24:42.590756   77634 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:24:42.590781   77634 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:24:42.590788   77634 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:24:42.590844   77634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:24:42.601517   77634 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:24:42.603120   77634 kubeconfig.go:125] found "embed-certs-782377" server: "https://192.168.50.114:8443"
	I0422 18:24:42.606189   77634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:24:42.616881   77634 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.114
	I0422 18:24:42.616911   77634 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:24:42.616922   77634 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:24:42.616970   77634 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:24:42.656829   77634 cri.go:89] found id: ""
	I0422 18:24:42.656923   77634 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:24:42.675575   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:24:42.686408   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:24:42.686431   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:24:42.686484   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:24:42.697303   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:24:42.697391   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:24:42.707693   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:24:42.717836   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:24:42.717932   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:24:42.729952   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.740902   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:24:42.740980   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:24:42.751946   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:24:42.761758   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:24:42.761830   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
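
The block above is a stale-config sweep: for each kubeconfig, grep for the expected control-plane endpoint and remove the file when the endpoint is absent (here every grep fails because the files do not exist yet). A compact Go sketch of the same logic:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleKubeconfigs removes any kubeconfig that does not reference the
    // expected control-plane endpoint, mirroring the grep / rm -f loop above.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                continue // missing file: nothing to clean, as in the log above
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Println("removing stale config:", p)
                _ = os.Remove(p)
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
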
	I0422 18:24:42.772699   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:24:42.783018   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:42.891737   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:40.327325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327782   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:40.327834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:40.327726   79040 retry.go:31] will retry after 926.321969ms: waiting for machine to come up
	I0422 18:24:41.255601   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256117   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:41.256147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:41.256072   79040 retry.go:31] will retry after 928.33371ms: waiting for machine to come up
	I0422 18:24:42.186290   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186798   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:42.186826   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:42.186762   79040 retry.go:31] will retry after 1.708117553s: waiting for machine to come up
	I0422 18:24:43.896236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:43.896682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:43.896597   79040 retry.go:31] will retry after 1.720003793s: waiting for machine to come up
	I0422 18:24:44.055395   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.163622709s)
	I0422 18:24:44.055429   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.278840   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.351743   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:44.460115   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:24:44.460202   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:44.960631   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.460588   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:45.478048   77634 api_server.go:72] duration metric: took 1.017932232s to wait for apiserver process to appear ...
	I0422 18:24:45.478082   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:24:45.478104   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:45.478702   77634 api_server.go:269] stopped: https://192.168.50.114:8443/healthz: Get "https://192.168.50.114:8443/healthz": dial tcp 192.168.50.114:8443: connect: connection refused
	I0422 18:24:45.978527   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.247298   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:24:48.247334   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:24:48.247351   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.295953   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.296005   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.478899   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.488884   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.488920   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:48.978472   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:48.992521   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:24:48.992552   77634 api_server.go:103] status: https://192.168.50.114:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:24:49.479179   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:24:49.485588   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:24:49.493015   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:24:49.493055   77634 api_server.go:131] duration metric: took 4.01496465s to wait for apiserver health ...
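
The healthz wait above retries roughly every 500ms, tolerating 403 (anonymous user) and 500 (post-start hooks still settling) until the endpoint returns 200 "ok". A simplified Go sketch of such a poll loop; skipping TLS verification is an assumption made here so the probe can stay anonymous, not something the log confirms:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it returns 200 or
    // the deadline passes, in the spirit of the retry loop logged above.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned "ok"
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        // URL taken from the log above.
        if err := waitHealthz("https://192.168.50.114:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
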
	I0422 18:24:49.493065   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:24:49.493074   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:24:49.494997   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:24:45.618240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618714   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:45.618744   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:45.618673   79040 retry.go:31] will retry after 2.396679945s: waiting for machine to come up
	I0422 18:24:48.016812   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017231   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:48.017258   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:48.017197   79040 retry.go:31] will retry after 2.304959564s: waiting for machine to come up
	I0422 18:24:49.496476   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:24:49.516525   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:24:49.541103   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:24:49.552224   77634 system_pods.go:59] 8 kube-system pods found
	I0422 18:24:49.552263   77634 system_pods.go:61] "coredns-7db6d8ff4d-lxcv2" [137ad3db-8bc5-4b7f-8eb0-12a278eba41c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:24:49.552273   77634 system_pods.go:61] "etcd-embed-certs-782377" [85322e31-1ad6-4239-8086-f2a465a28d8e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:24:49.552287   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [e791d7d4-a94d-4cce-a50d-4e569350f210] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:24:49.552307   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [cbcc2e7f-7b3a-435b-97d5-5b69b7e399c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:24:49.552317   77634 system_pods.go:61] "kube-proxy-r4249" [7ffb3b8f-53d8-45df-8426-74f0ffb0d20a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0422 18:24:49.552327   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [9568040b-3eca-403e-b078-d6f2071e70c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:24:49.552335   77634 system_pods.go:61] "metrics-server-569cc877fc-d8s5p" [3bcda1df-02f7-4405-95c7-4d8559a0138c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:24:49.552342   77634 system_pods.go:61] "storage-provisioner" [c196d779-346a-4e3f-b1c3-dde4292df017] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0422 18:24:49.552351   77634 system_pods.go:74] duration metric: took 11.221599ms to wait for pod list to return data ...
	I0422 18:24:49.552373   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:24:49.556086   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:24:49.556130   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:24:49.556142   77634 node_conditions.go:105] duration metric: took 3.764067ms to run NodePressure ...
	I0422 18:24:49.556161   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:24:49.852023   77634 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856866   77634 kubeadm.go:733] kubelet initialised
	I0422 18:24:49.856894   77634 kubeadm.go:734] duration metric: took 4.83996ms waiting for restarted kubelet to initialise ...
	I0422 18:24:49.856904   77634 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:24:49.863808   77634 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.868817   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868840   77634 pod_ready.go:81] duration metric: took 5.001181ms for pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.868849   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "coredns-7db6d8ff4d-lxcv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.868855   77634 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.873591   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873612   77634 pod_ready.go:81] duration metric: took 4.750292ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.873621   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "etcd-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.873627   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.878471   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878494   77634 pod_ready.go:81] duration metric: took 4.859998ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.878503   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.878510   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:49.945869   77634 pod_ready.go:97] node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945909   77634 pod_ready.go:81] duration metric: took 67.385628ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	E0422 18:24:49.945923   77634 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-782377" hosting pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-782377" has status "Ready":"False"
	I0422 18:24:49.945932   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345633   77634 pod_ready.go:92] pod "kube-proxy-r4249" in "kube-system" namespace has status "Ready":"True"
	I0422 18:24:50.345655   77634 pod_ready.go:81] duration metric: took 399.713725ms for pod "kube-proxy-r4249" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:50.345666   77634 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:24:52.352988   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
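
The pod_ready waits above repeatedly ask whether each system-critical pod (and the node hosting it) reports the Ready condition as True. A minimal client-go sketch of the per-pod check; the kubeconfig path is a placeholder, while the pod name is the one being waited on above:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether a kube-system pod has its Ready condition set
    // to True, the question each pod_ready wait above keeps asking.
    func isPodReady(cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // The kubeconfig path is hypothetical; the pod name is taken from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := isPodReady(cs, "kube-scheduler-embed-certs-782377")
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", ready)
    }
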
	I0422 18:24:50.324396   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324920   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | unable to find current IP address of domain default-k8s-diff-port-856422 in network mk-default-k8s-diff-port-856422
	I0422 18:24:50.324953   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | I0422 18:24:50.324894   79040 retry.go:31] will retry after 4.018790507s: waiting for machine to come up
	I0422 18:24:54.347584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348046   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Found IP for machine: 192.168.61.206
	I0422 18:24:54.348081   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has current primary IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.348094   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserving static IP address...
	I0422 18:24:54.348535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Reserved static IP address: 192.168.61.206
	I0422 18:24:54.348560   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Waiting for SSH to be available...
	I0422 18:24:54.348584   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.348624   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | skip adding static IP to network mk-default-k8s-diff-port-856422 - found existing host DHCP lease matching {name: "default-k8s-diff-port-856422", mac: "52:54:00:df:4a:d1", ip: "192.168.61.206"}
	I0422 18:24:54.348640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Getting to WaitForSSH function...
	I0422 18:24:54.351069   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351570   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.351608   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.351727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH client type: external
	I0422 18:24:54.351758   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa (-rw-------)
	I0422 18:24:54.351793   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:24:54.351810   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | About to run SSH command:
	I0422 18:24:54.351834   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | exit 0
	I0422 18:24:54.479277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | SSH cmd err, output: <nil>: 
	I0422 18:24:54.479674   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetConfigRaw
	I0422 18:24:54.480350   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.483089   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.483498   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.483801   77929 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/config.json ...
	I0422 18:24:54.484031   77929 machine.go:94] provisionDockerMachine start ...
	I0422 18:24:54.484051   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:54.484272   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.486449   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.486857   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.486992   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.487178   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487344   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.487470   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.487635   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.487825   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.487838   77929 main.go:141] libmachine: About to run SSH command:
	hostname
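
Provisioning here runs single commands (first `hostname`) over SSH as user docker, with key-only auth and host-key checking disabled, per the external ssh invocation logged earlier. A small golang.org/x/crypto/ssh sketch of the same shape; the library choice is an assumption of the sketch, not necessarily what libmachine uses internally:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh" // assumed dependency
    )

    // runRemote runs one command over SSH with private-key auth, matching the
    // shape of the provisioning step above.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.Output(cmd)
        return string(out), err
    }

    func main() {
        // Address, user and key path are taken from the log above.
        out, err := runRemote("192.168.61.206:22", "docker",
            "/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa",
            "hostname")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(out)
    }
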
	I0422 18:24:55.812288   78377 start.go:364] duration metric: took 3m11.050220887s to acquireMachinesLock for "old-k8s-version-367072"
	I0422 18:24:55.812348   78377 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:24:55.812359   78377 fix.go:54] fixHost starting: 
	I0422 18:24:55.812769   78377 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:24:55.812806   78377 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:24:55.830114   78377 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0422 18:24:55.830528   78377 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:24:55.831130   78377 main.go:141] libmachine: Using API Version  1
	I0422 18:24:55.831155   78377 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:24:55.831459   78377 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:24:55.831688   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:24:55.831855   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetState
	I0422 18:24:55.833322   78377 fix.go:112] recreateIfNeeded on old-k8s-version-367072: state=Stopped err=<nil>
	I0422 18:24:55.833351   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	W0422 18:24:55.833481   78377 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:24:55.835517   78377 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-367072" ...
	I0422 18:24:54.603732   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:24:54.603759   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.603993   77929 buildroot.go:166] provisioning hostname "default-k8s-diff-port-856422"
	I0422 18:24:54.604017   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.604280   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.606938   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607302   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.607331   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.607524   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.607693   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.607856   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.608002   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.608174   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.608381   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.608398   77929 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-856422 && echo "default-k8s-diff-port-856422" | sudo tee /etc/hostname
	I0422 18:24:54.734622   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-856422
	
	I0422 18:24:54.734646   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.737804   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738109   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.738141   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.738236   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:54.738495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738650   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:54.738773   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:54.738950   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:54.739157   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:54.739176   77929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-856422' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-856422/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-856422' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:24:54.864646   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:24:54.864679   77929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:24:54.864732   77929 buildroot.go:174] setting up certificates
	I0422 18:24:54.864745   77929 provision.go:84] configureAuth start
	I0422 18:24:54.864764   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetMachineName
	I0422 18:24:54.865059   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:54.868205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868626   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.868666   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.868868   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:54.871736   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872118   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:54.872147   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:54.872275   77929 provision.go:143] copyHostCerts
	I0422 18:24:54.872340   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:24:54.872353   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:24:54.872424   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:24:54.872545   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:24:54.872557   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:24:54.872598   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:24:54.872676   77929 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:24:54.872688   77929 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:24:54.872718   77929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:24:54.872794   77929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-856422 san=[127.0.0.1 192.168.61.206 default-k8s-diff-port-856422 localhost minikube]
	I0422 18:24:55.091765   77929 provision.go:177] copyRemoteCerts
	I0422 18:24:55.091820   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:24:55.091848   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.094572   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.094939   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.094970   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.095209   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.095501   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.095767   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.095958   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.192243   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:24:55.223313   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0422 18:24:55.250149   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:24:55.279442   77929 provision.go:87] duration metric: took 414.679508ms to configureAuth
	I0422 18:24:55.279474   77929 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:24:55.280056   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:24:55.280125   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.282806   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283205   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.283237   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.283405   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.283636   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283803   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.283941   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.284109   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.284276   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.284294   77929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:24:55.565199   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:24:55.565225   77929 machine.go:97] duration metric: took 1.081180365s to provisionDockerMachine
	I0422 18:24:55.565239   77929 start.go:293] postStartSetup for "default-k8s-diff-port-856422" (driver="kvm2")
	I0422 18:24:55.565282   77929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:24:55.565312   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.565649   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:24:55.565682   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.568211   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.568614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.568809   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.568994   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.569182   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.569352   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.654461   77929 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:24:55.658992   77929 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:24:55.659016   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:24:55.659091   77929 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:24:55.659199   77929 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:24:55.659309   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:24:55.669183   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:24:55.694953   77929 start.go:296] duration metric: took 129.698973ms for postStartSetup
	I0422 18:24:55.694998   77929 fix.go:56] duration metric: took 19.550918724s for fixHost
	I0422 18:24:55.695021   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.697596   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.697926   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.697958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.698133   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.698325   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698479   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.698579   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.698680   77929 main.go:141] libmachine: Using SSH client type: native
	I0422 18:24:55.698897   77929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.206 22 <nil> <nil>}
	I0422 18:24:55.698914   77929 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0422 18:24:55.812106   77929 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810295.778892948
	
	I0422 18:24:55.812132   77929 fix.go:216] guest clock: 1713810295.778892948
	I0422 18:24:55.812143   77929 fix.go:229] Guest: 2024-04-22 18:24:55.778892948 +0000 UTC Remote: 2024-04-22 18:24:55.69500303 +0000 UTC m=+271.245786903 (delta=83.889918ms)
	I0422 18:24:55.812168   77929 fix.go:200] guest clock delta is within tolerance: 83.889918ms
	I0422 18:24:55.812176   77929 start.go:83] releasing machines lock for "default-k8s-diff-port-856422", held for 19.668119564s
	I0422 18:24:55.812213   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.812500   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:55.815404   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.815786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.815828   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.816036   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816526   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816698   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:24:55.816781   77929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:24:55.816823   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.817092   77929 ssh_runner.go:195] Run: cat /version.json
	I0422 18:24:55.817116   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:24:55.819495   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819710   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.819931   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.819958   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820045   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820157   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:55.820181   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:55.820217   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820362   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:24:55.820366   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820535   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:24:55.820631   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.820716   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:24:55.820845   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:24:55.904810   77929 ssh_runner.go:195] Run: systemctl --version
	I0422 18:24:55.937093   77929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:24:56.089389   77929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:24:56.096144   77929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:24:56.096208   77929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:24:56.118194   77929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:24:56.118224   77929 start.go:494] detecting cgroup driver to use...
	I0422 18:24:56.118292   77929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:24:56.134918   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:24:56.154107   77929 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:24:56.154180   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:24:56.168971   77929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:24:56.188793   77929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:24:56.310223   77929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:24:56.492316   77929 docker.go:233] disabling docker service ...
	I0422 18:24:56.492430   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:24:56.515169   77929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:24:56.529734   77929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:24:56.670628   77929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:24:56.810823   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:24:56.826785   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:24:56.847682   77929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:24:56.847741   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.860499   77929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:24:56.860576   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.872086   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.883347   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.901596   77929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:24:56.916912   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.928121   77929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.947335   77929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:24:56.958431   77929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:24:56.968077   77929 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:24:56.968131   77929 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:24:56.982135   77929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:24:56.991801   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:24:57.125635   77929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:24:57.263889   77929 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:24:57.263973   77929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:24:57.269573   77929 start.go:562] Will wait 60s for crictl version
	I0422 18:24:57.269627   77929 ssh_runner.go:195] Run: which crictl
	I0422 18:24:57.273613   77929 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:24:57.314357   77929 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:24:57.314463   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.345062   77929 ssh_runner.go:195] Run: crio --version
	I0422 18:24:57.380868   77929 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:24:54.353338   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:56.853757   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:24:57.382284   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetIP
	I0422 18:24:57.385215   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385614   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:24:57.385655   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:24:57.385889   77929 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0422 18:24:57.390482   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:24:57.405644   77929 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:24:57.405766   77929 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:24:57.405868   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:24:57.452528   77929 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:24:57.452604   77929 ssh_runner.go:195] Run: which lz4
	I0422 18:24:57.456903   77929 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:24:57.461373   77929 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:24:57.461411   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394544937 bytes)
	I0422 18:24:59.060426   77929 crio.go:462] duration metric: took 1.603560712s to copy over tarball
	I0422 18:24:59.060532   77929 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:24:55.836947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .Start
	I0422 18:24:55.837156   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring networks are active...
	I0422 18:24:55.837991   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network default is active
	I0422 18:24:55.838340   78377 main.go:141] libmachine: (old-k8s-version-367072) Ensuring network mk-old-k8s-version-367072 is active
	I0422 18:24:55.838802   78377 main.go:141] libmachine: (old-k8s-version-367072) Getting domain xml...
	I0422 18:24:55.839484   78377 main.go:141] libmachine: (old-k8s-version-367072) Creating domain...
	I0422 18:24:57.114447   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting to get IP...
	I0422 18:24:57.115418   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.115808   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.115885   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.115780   79197 retry.go:31] will retry after 292.692957ms: waiting for machine to come up
	I0422 18:24:57.410220   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.410760   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.410793   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.410707   79197 retry.go:31] will retry after 381.746596ms: waiting for machine to come up
	I0422 18:24:57.794121   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:57.794537   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:57.794561   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:57.794500   79197 retry.go:31] will retry after 343.501318ms: waiting for machine to come up
	I0422 18:24:58.140203   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.140843   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.140872   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.140795   79197 retry.go:31] will retry after 497.222481ms: waiting for machine to come up
	I0422 18:24:58.639611   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:58.640103   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:58.640133   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:58.640061   79197 retry.go:31] will retry after 578.746837ms: waiting for machine to come up
	I0422 18:24:59.220771   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.221312   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.221342   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.221264   79197 retry.go:31] will retry after 773.821721ms: waiting for machine to come up
	I0422 18:24:58.854112   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:00.856147   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:01.563849   77929 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.503290941s)
	I0422 18:25:01.563881   77929 crio.go:469] duration metric: took 2.503413712s to extract the tarball
	I0422 18:25:01.563891   77929 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:01.603330   77929 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:01.649885   77929 crio.go:514] all images are preloaded for cri-o runtime.
	I0422 18:25:01.649909   77929 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:25:01.649916   77929 kubeadm.go:928] updating node { 192.168.61.206 8444 v1.30.0 crio true true} ...
	I0422 18:25:01.650053   77929 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-856422 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:01.650143   77929 ssh_runner.go:195] Run: crio config
	I0422 18:25:01.698892   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:01.698915   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:01.698929   77929 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:01.698948   77929 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.206 APIServerPort:8444 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-856422 NodeName:default-k8s-diff-port-856422 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:01.699075   77929 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.206
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-856422"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:01.699150   77929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:01.709830   77929 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:01.709903   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:01.720447   77929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0422 18:25:01.738745   77929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:01.756420   77929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0422 18:25:01.775364   77929 ssh_runner.go:195] Run: grep 192.168.61.206	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:01.779476   77929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:01.792860   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:01.920607   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:01.939637   77929 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422 for IP: 192.168.61.206
	I0422 18:25:01.939658   77929 certs.go:194] generating shared ca certs ...
	I0422 18:25:01.939675   77929 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:01.939858   77929 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:01.939911   77929 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:01.939922   77929 certs.go:256] generating profile certs ...
	I0422 18:25:01.940026   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/client.key
	I0422 18:25:01.940105   77929 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key.e8400874
	I0422 18:25:01.940170   77929 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key
	I0422 18:25:01.940320   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:01.940386   77929 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:01.940400   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:01.940437   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:01.940474   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:01.940506   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:01.940603   77929 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:01.941408   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:01.981392   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:02.020335   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:02.057221   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:02.088571   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0422 18:25:02.123716   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:02.153926   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:02.183499   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/default-k8s-diff-port-856422/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:02.212438   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:02.238650   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:02.265786   77929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:02.295001   77929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:02.315343   77929 ssh_runner.go:195] Run: openssl version
	I0422 18:25:02.322001   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:02.334785   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340619   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.340686   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:02.348942   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:02.364960   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:02.381460   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386720   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.386794   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:02.392894   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:02.404951   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:02.417334   77929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423503   77929 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.423573   77929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:02.430512   77929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:02.444132   77929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:02.449749   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:02.456667   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:02.463700   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:02.470474   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:02.477324   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:02.483900   77929 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:25:02.490614   77929 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-856422 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:default-k8s-diff-port-856422 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:02.490719   77929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:02.490768   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.538766   77929 cri.go:89] found id: ""
	I0422 18:25:02.538849   77929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:02.549686   77929 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:02.549711   77929 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:02.549717   77929 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:02.549794   77929 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:02.560594   77929 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:02.561584   77929 kubeconfig.go:125] found "default-k8s-diff-port-856422" server: "https://192.168.61.206:8444"
	I0422 18:25:02.563656   77929 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:02.575462   77929 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.206
	I0422 18:25:02.575507   77929 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:02.575522   77929 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:02.575606   77929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:02.628012   77929 cri.go:89] found id: ""
	I0422 18:25:02.628080   77929 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:02.645405   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:02.656723   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:02.656751   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:02.656814   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:25:02.667202   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:02.667269   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:02.678303   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:25:02.688600   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:02.688690   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:02.699963   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.710329   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:02.710393   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:02.721188   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:25:02.731964   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:02.732040   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:02.743541   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:02.755030   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:02.870301   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:03.995375   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.125032803s)
	I0422 18:25:03.995447   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.230252   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.302979   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:04.395038   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:04.395115   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:24:59.996437   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:24:59.996984   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:24:59.997018   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:24:59.996926   79197 retry.go:31] will retry after 1.191182438s: waiting for machine to come up
	I0422 18:25:01.190382   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:01.190954   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:01.190990   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:01.190917   79197 retry.go:31] will retry after 1.312288818s: waiting for machine to come up
	I0422 18:25:02.504320   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:02.504783   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:02.504807   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:02.504744   79197 retry.go:31] will retry after 1.553447941s: waiting for machine to come up
	I0422 18:25:04.060300   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:04.060822   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:04.060855   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:04.060778   79197 retry.go:31] will retry after 1.790234912s: waiting for machine to come up
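The "will retry after ..." lines above come from libmachine waiting for the old-k8s-version VM to report an IP address, backing off a little longer on each attempt. Purely as an illustration of that pattern (this is not minikube's actual retry.go; the probe function, the two-minute budget, and the jitter factor below are assumptions), a minimal Go sketch:

    // Illustrative only: a jittered, growing backoff loop similar in spirit to the
    // "will retry after ..." lines above; not minikube's actual retry.go code.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls probe() until it succeeds or the time budget is spent.
    // probe stands in for "ask libvirt for the domain's current IP address".
    func waitForIP(probe func() (string, error), budget time.Duration) (string, error) {
    	deadline := time.Now().Add(budget)
    	delay := time.Second
    	for time.Now().Before(deadline) {
    		if ip, err := probe(); err == nil {
    			return ip, nil
    		}
    		// Grow the delay and add jitter so parallel machines do not probe in lockstep.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return "", errors.New("machine never reported an IP address")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 3 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.72.149", nil // value taken from the log above
    	}, 2*time.Minute)
    	fmt.Println(ip, err)
    }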
	I0422 18:25:03.502023   77634 pod_ready.go:102] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.353882   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:04.353905   77634 pod_ready.go:81] duration metric: took 14.00823208s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:04.353915   77634 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:06.363356   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:08.363954   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:04.896176   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.396048   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:05.440071   77929 api_server.go:72] duration metric: took 1.045032787s to wait for apiserver process to appear ...
	I0422 18:25:05.440103   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:25:05.440148   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.759542   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.759577   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.759592   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.793255   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:25:08.793294   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:25:08.940652   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:08.945611   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:08.945646   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:09.440292   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.464743   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.464770   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:05.852898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:05.853386   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:05.853413   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:05.853350   79197 retry.go:31] will retry after 2.265221688s: waiting for machine to come up
	I0422 18:25:08.121376   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:08.121797   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:08.121835   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:08.121747   79197 retry.go:31] will retry after 3.098868652s: waiting for machine to come up
	I0422 18:25:09.940470   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:09.946872   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:25:09.946900   77929 api_server.go:103] status: https://192.168.61.206:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:25:10.441291   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:25:10.445834   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:25:10.452788   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:25:10.452814   77929 api_server.go:131] duration metric: took 5.012704724s to wait for apiserver health ...
	I0422 18:25:10.452823   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:25:10.452828   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:10.454695   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
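The healthz sequence above is the usual apiserver bring-up pattern: an anonymous probe is first rejected with 403, the endpoint then answers 500 while poststarthooks such as rbac/bootstrap-roles are still settling, and it finally returns 200. Purely as a sketch of that polling loop (not minikube's api_server.go; the URL, interval, and five-minute budget below are assumptions), in Go:

    // Minimal sketch (not minikube's implementation): poll an apiserver
    // /healthz endpoint until it reports 200 OK or a deadline passes.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, budget time.Duration) error {
    	// The apiserver serves a self-signed certificate during bring-up, so this
    	// unauthenticated probe skips verification.
    	client := &http.Client{
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    		Timeout: 5 * time.Second,
    	}
    	deadline := time.Now().Add(budget)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				return nil // healthz returned 200: apiserver is healthy
    			}
    			// 403 (anonymous user) and 500 (poststarthooks still failing)
    			// are simply treated as "not ready yet", as in the log above.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.61.206:8444/healthz", 5*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

Run against the address in the log, the sketch prints nothing once the apiserver settles; intermediate 403 and 500 responses just cause another poll.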
	I0422 18:25:10.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:13.361234   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:10.456234   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:25:10.469460   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:25:10.510297   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:25:10.527988   77929 system_pods.go:59] 8 kube-system pods found
	I0422 18:25:10.528034   77929 system_pods.go:61] "coredns-7db6d8ff4d-w968m" [1372c3d4-cb23-4f33-911b-57876688fcd4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:25:10.528044   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [af6c3f45-494d-469b-95e0-3d0842d07a70] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:25:10.528051   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [665925b4-3073-41c2-86c0-12186f079459] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:25:10.528057   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [e8661b67-89c5-43a6-b66e-828f637942e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:25:10.528061   77929 system_pods.go:61] "kube-proxy-4xvx2" [0e662ebe-1f6f-48fe-86c7-595b0bfa4bb6] Running
	I0422 18:25:10.528066   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [e6101593-2ee5-4765-b129-33b3ed7d4c98] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:25:10.528075   77929 system_pods.go:61] "metrics-server-569cc877fc-l5qqw" [85eab808-f1f0-4fbc-9c54-1ae307226243] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:25:10.528079   77929 system_pods.go:61] "storage-provisioner" [ba8465de-babc-4496-809f-68f6ec917ce8] Running
	I0422 18:25:10.528095   77929 system_pods.go:74] duration metric: took 17.768241ms to wait for pod list to return data ...
	I0422 18:25:10.528104   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:25:10.539169   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:25:10.539202   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:25:10.539214   77929 node_conditions.go:105] duration metric: took 11.105847ms to run NodePressure ...
	I0422 18:25:10.539237   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:10.808687   77929 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:25:10.815993   77929 kubeadm.go:733] kubelet initialised
	I0422 18:25:10.816025   77929 kubeadm.go:734] duration metric: took 7.302574ms waiting for restarted kubelet to initialise ...
	I0422 18:25:10.816037   77929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:25:10.824257   77929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:12.837255   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:11.221887   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:11.222319   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | unable to find current IP address of domain old-k8s-version-367072 in network mk-old-k8s-version-367072
	I0422 18:25:11.222358   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | I0422 18:25:11.222277   79197 retry.go:31] will retry after 4.068460973s: waiting for machine to come up
	I0422 18:25:16.704684   77400 start.go:364] duration metric: took 53.475319353s to acquireMachinesLock for "no-preload-407991"
	I0422 18:25:16.704741   77400 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:25:16.704752   77400 fix.go:54] fixHost starting: 
	I0422 18:25:16.705132   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:25:16.705166   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:25:16.721711   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33945
	I0422 18:25:16.722127   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:25:16.722671   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:25:16.722693   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:25:16.723022   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:25:16.723220   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:16.723426   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:25:16.725197   77400 fix.go:112] recreateIfNeeded on no-preload-407991: state=Stopped err=<nil>
	I0422 18:25:16.725231   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	W0422 18:25:16.725430   77400 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:25:16.727275   77400 out.go:177] * Restarting existing kvm2 VM for "no-preload-407991" ...
	I0422 18:25:15.295463   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296039   78377 main.go:141] libmachine: (old-k8s-version-367072) Found IP for machine: 192.168.72.149
	I0422 18:25:15.296072   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has current primary IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.296081   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserving static IP address...
	I0422 18:25:15.296472   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.296493   78377 main.go:141] libmachine: (old-k8s-version-367072) Reserved static IP address: 192.168.72.149
	I0422 18:25:15.296508   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | skip adding static IP to network mk-old-k8s-version-367072 - found existing host DHCP lease matching {name: "old-k8s-version-367072", mac: "52:54:00:82:9f:b2", ip: "192.168.72.149"}
	I0422 18:25:15.296524   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Getting to WaitForSSH function...
	I0422 18:25:15.296537   78377 main.go:141] libmachine: (old-k8s-version-367072) Waiting for SSH to be available...
	I0422 18:25:15.299164   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299527   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.299562   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.299661   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH client type: external
	I0422 18:25:15.299692   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa (-rw-------)
	I0422 18:25:15.299731   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.149 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:15.299745   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | About to run SSH command:
	I0422 18:25:15.299762   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | exit 0
	I0422 18:25:15.431323   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:15.431669   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetConfigRaw
	I0422 18:25:15.432328   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.434829   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435261   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.435293   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.435554   78377 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/config.json ...
	I0422 18:25:15.435765   78377 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:15.435786   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:15.436017   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.438390   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438750   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.438784   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.438910   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.439095   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439314   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.439486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.439666   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.439849   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.439861   78377 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:15.555657   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:15.555686   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.555931   78377 buildroot.go:166] provisioning hostname "old-k8s-version-367072"
	I0422 18:25:15.555962   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.556169   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.558789   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559254   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.559292   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.559331   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.559492   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559641   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.559748   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.559877   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.560055   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.560077   78377 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-367072 && echo "old-k8s-version-367072" | sudo tee /etc/hostname
	I0422 18:25:15.690454   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-367072
	
	I0422 18:25:15.690486   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.693309   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693654   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.693690   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.693952   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.694172   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694390   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.694546   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.694732   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:15.694940   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:15.694960   78377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-367072' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-367072/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-367072' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:15.821039   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:15.821068   78377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:15.821096   78377 buildroot.go:174] setting up certificates
	I0422 18:25:15.821105   78377 provision.go:84] configureAuth start
	I0422 18:25:15.821113   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetMachineName
	I0422 18:25:15.821339   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:15.824209   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824673   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.824710   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.824884   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.827439   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827725   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.827752   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.827907   78377 provision.go:143] copyHostCerts
	I0422 18:25:15.827974   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:15.827987   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:15.828059   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:15.828170   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:15.828181   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:15.828209   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:15.828281   78377 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:15.828291   78377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:15.828317   78377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:15.828411   78377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-367072 san=[127.0.0.1 192.168.72.149 localhost minikube old-k8s-version-367072]
	I0422 18:25:15.967003   78377 provision.go:177] copyRemoteCerts
	I0422 18:25:15.967056   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:15.967082   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:15.969759   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970152   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:15.970189   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:15.970419   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:15.970600   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:15.970750   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:15.970903   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.058600   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:16.088368   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:25:16.119116   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:16.145380   78377 provision.go:87] duration metric: took 324.262342ms to configureAuth
	I0422 18:25:16.145416   78377 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:16.145651   78377 config.go:182] Loaded profile config "old-k8s-version-367072": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:25:16.145736   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.148776   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149221   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.149251   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.149449   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.149624   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149789   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.149947   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.150116   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.150295   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.150313   78377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:16.448112   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:16.448141   78377 machine.go:97] duration metric: took 1.012360153s to provisionDockerMachine
	I0422 18:25:16.448154   78377 start.go:293] postStartSetup for "old-k8s-version-367072" (driver="kvm2")
	I0422 18:25:16.448166   78377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:16.448188   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.448508   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:16.448541   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.451479   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.451874   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.451898   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.452170   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.452373   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.452576   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.452773   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.543300   78377 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:16.549385   78377 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:16.549409   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:16.549473   78377 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:16.549590   78377 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:16.549727   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:16.560863   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:16.585861   78377 start.go:296] duration metric: took 137.693932ms for postStartSetup
	I0422 18:25:16.585911   78377 fix.go:56] duration metric: took 20.77354305s for fixHost
	I0422 18:25:16.585931   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.588815   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589234   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.589263   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.589495   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.589713   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.589877   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.590039   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.590245   78377 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:16.590396   78377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.149 22 <nil> <nil>}
	I0422 18:25:16.590406   78377 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:16.704537   78377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810316.682617297
	
	I0422 18:25:16.704559   78377 fix.go:216] guest clock: 1713810316.682617297
	I0422 18:25:16.704569   78377 fix.go:229] Guest: 2024-04-22 18:25:16.682617297 +0000 UTC Remote: 2024-04-22 18:25:16.585915688 +0000 UTC m=+211.981005523 (delta=96.701609ms)
	I0422 18:25:16.704592   78377 fix.go:200] guest clock delta is within tolerance: 96.701609ms
	I0422 18:25:16.704600   78377 start.go:83] releasing machines lock for "old-k8s-version-367072", held for 20.892277591s
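The fix.go lines above compare the guest clock (date +%s.%N inside the VM) against the host-side reference time and accept the drift when it stays inside a tolerance. As a worked example only, using the two timestamps taken from the log and an assumed one-second tolerance:

    // Worked example of the guest-clock drift check logged above.
    // The 1s tolerance is an assumption for illustration; only the two
    // timestamps come from the log itself.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	guest := time.Unix(1713810316, 682617297)                         // guest: date +%s.%N
    	remote := time.Date(2024, 4, 22, 18, 25, 16, 585915688, time.UTC) // host-side reference
    	delta := guest.Sub(remote)
    	const tolerance = time.Second
    	fmt.Printf("delta=%s within tolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
    }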
	I0422 18:25:16.704631   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.704920   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:16.707837   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708205   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.708230   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.708427   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.708994   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709163   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .DriverName
	I0422 18:25:16.709240   78377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:16.709279   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.709342   78377 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:16.709364   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHHostname
	I0422 18:25:16.712025   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712216   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712450   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712498   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712566   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.712674   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:16.712720   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:16.712722   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.712857   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.712945   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHPort
	I0422 18:25:16.713038   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.713101   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHKeyPath
	I0422 18:25:16.713240   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetSSHUsername
	I0422 18:25:16.713370   78377 sshutil.go:53] new ssh client: &{IP:192.168.72.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa Username:docker}
	I0422 18:25:16.804499   78377 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:16.836596   78377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:16.993049   78377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:17.000275   78377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:17.000346   78377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:17.023327   78377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:17.023351   78377 start.go:494] detecting cgroup driver to use...
	I0422 18:25:17.023425   78377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:17.045320   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:17.061622   78377 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:17.061692   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:17.078768   78377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:17.094562   78377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:17.221702   78377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:17.390374   78377 docker.go:233] disabling docker service ...
	I0422 18:25:17.390449   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:17.409352   78377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:17.425491   78377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:17.582359   78377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:17.735691   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:17.752812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:17.777437   78377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0422 18:25:17.777495   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.789378   78377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:17.789441   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.801159   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.813702   78377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:17.825938   78377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:17.841552   78377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:17.852365   78377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:17.852455   78377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:17.870233   78377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:17.882139   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:18.021505   78377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:18.179583   78377 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:18.179677   78377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:18.185047   78377 start.go:562] Will wait 60s for crictl version
	I0422 18:25:18.185105   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:18.189079   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:18.227533   78377 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:18.227643   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.260147   78377 ssh_runner.go:195] Run: crio --version
	I0422 18:25:18.297011   78377 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
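
Note: the block above shows the guest being switched to CRI-O over SSH: cri-dockerd and docker are stopped and masked, crictl is pointed at /var/run/crio/crio.sock, and the pause image and cgroup manager are rewritten in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A minimal standalone sketch of issuing those same edits over SSH with golang.org/x/crypto/ssh; this is not minikube's ssh_runner, and the key path is copied verbatim from the log:

// crio_config_sketch.go — illustrative only; minikube drives this through its own ssh_runner.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path taken from the sshutil lines above; adjust for other environments.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18706-11572/.minikube/machines/old-k8s-version-367072/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.72.149:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The same edits the log shows: pause image and cgroup manager, then restart crio.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		out, err := sess.CombinedOutput(c)
		sess.Close()
		if err != nil {
			log.Fatalf("%s: %v\n%s", c, err, out)
		}
		fmt.Printf("ok: %s\n", c)
	}
}
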
	I0422 18:25:15.362667   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:17.861622   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:14.831683   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:14.831706   77929 pod_ready.go:81] duration metric: took 4.007420508s for pod "coredns-7db6d8ff4d-w968m" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:14.831715   77929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343025   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:16.343056   77929 pod_ready.go:81] duration metric: took 1.511333532s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:16.343070   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351244   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:17.351267   77929 pod_ready.go:81] duration metric: took 1.008189798s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:17.351280   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:19.365025   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:18.298407   78377 main.go:141] libmachine: (old-k8s-version-367072) Calling .GetIP
	I0422 18:25:18.301613   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302026   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:9f:b2", ip: ""} in network mk-old-k8s-version-367072: {Iface:virbr4 ExpiryTime:2024-04-22 19:25:08 +0000 UTC Type:0 Mac:52:54:00:82:9f:b2 Iaid: IPaddr:192.168.72.149 Prefix:24 Hostname:old-k8s-version-367072 Clientid:01:52:54:00:82:9f:b2}
	I0422 18:25:18.302057   78377 main.go:141] libmachine: (old-k8s-version-367072) DBG | domain old-k8s-version-367072 has defined IP address 192.168.72.149 and MAC address 52:54:00:82:9f:b2 in network mk-old-k8s-version-367072
	I0422 18:25:18.302317   78377 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:18.307249   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:18.321575   78377 kubeadm.go:877] updating cluster {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:18.321721   78377 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 18:25:18.321767   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:18.382066   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:18.382133   78377 ssh_runner.go:195] Run: which lz4
	I0422 18:25:18.387080   78377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0422 18:25:18.392576   78377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0422 18:25:18.392613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0422 18:25:16.728745   77400 main.go:141] libmachine: (no-preload-407991) Calling .Start
	I0422 18:25:16.728946   77400 main.go:141] libmachine: (no-preload-407991) Ensuring networks are active...
	I0422 18:25:16.729604   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network default is active
	I0422 18:25:16.729979   77400 main.go:141] libmachine: (no-preload-407991) Ensuring network mk-no-preload-407991 is active
	I0422 18:25:16.730458   77400 main.go:141] libmachine: (no-preload-407991) Getting domain xml...
	I0422 18:25:16.731314   77400 main.go:141] libmachine: (no-preload-407991) Creating domain...
	I0422 18:25:18.079763   77400 main.go:141] libmachine: (no-preload-407991) Waiting to get IP...
	I0422 18:25:18.080862   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.081371   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.081401   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.081340   79353 retry.go:31] will retry after 226.494122ms: waiting for machine to come up
	I0422 18:25:18.309499   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.309914   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.310019   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.309900   79353 retry.go:31] will retry after 375.374338ms: waiting for machine to come up
	I0422 18:25:18.686507   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:18.687064   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:18.687093   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:18.687018   79353 retry.go:31] will retry after 341.714326ms: waiting for machine to come up
	I0422 18:25:19.030772   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.031261   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.031290   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.031229   79353 retry.go:31] will retry after 388.101939ms: waiting for machine to come up
	I0422 18:25:19.420994   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:19.421478   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:19.421500   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:19.421397   79353 retry.go:31] will retry after 732.485222ms: waiting for machine to come up
	I0422 18:25:20.155887   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:20.156717   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:20.156750   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:20.156665   79353 retry.go:31] will retry after 950.207106ms: waiting for machine to come up
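
Note: meanwhile the no-preload-407991 VM is being restarted and libmachine polls its libvirt network for a DHCP-assigned IP, backing off between attempts (the "will retry after ..." lines). A toy sketch of that backoff loop follows; lookupIP is a hypothetical placeholder, not a real libvirt or minikube call:

// retry_sketch.go — a jittered, roughly doubling backoff poll, illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// lookupIP is a stand-in for reading the network's DHCP leases.
func lookupIP(mac string) (string, error) {
	return "", errNoIP // placeholder: always "not yet" in this sketch
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Jitter the delay, then roughly double it, giving steps similar to the
		// ~226ms, ~375ms, ~732ms, ... intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	// Short timeout so the sketch terminates quickly; the real loop waits minutes.
	if _, err := waitForIP("52:54:00:a4:e4:a0", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
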
	I0422 18:25:19.878966   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.364111   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:21.859384   77929 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:22.362519   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.362552   77929 pod_ready.go:81] duration metric: took 5.011264858s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.362566   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371087   77929 pod_ready.go:92] pod "kube-proxy-4xvx2" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.371112   77929 pod_ready.go:81] duration metric: took 8.534129ms for pod "kube-proxy-4xvx2" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.371142   77929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376156   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:25:22.376183   77929 pod_ready.go:81] duration metric: took 5.03143ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:22.376196   77929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	I0422 18:25:24.385435   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
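
Note: the pod_ready lines interleaved through this section poll each kube-system pod (here on default-k8s-diff-port-856422) for the Ready condition with a 4m0s budget; metrics-server stays not-Ready throughout. A minimal sketch of the same readiness poll with client-go, assuming a kubeconfig at ~/.kube/config; the pod name is copied from the log and this is not minikube's pod_ready helper:

// pod_ready_sketch.go — illustrative readiness poll with client-go.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	name, ns := "metrics-server-569cc877fc-l5qqw", "kube-system"
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget the log uses
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
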
	I0422 18:25:20.319994   78377 crio.go:462] duration metric: took 1.932984536s to copy over tarball
	I0422 18:25:20.320076   78377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0422 18:25:23.622384   78377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.30227916s)
	I0422 18:25:23.622411   78377 crio.go:469] duration metric: took 3.302385661s to extract the tarball
	I0422 18:25:23.622419   78377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0422 18:25:23.678794   78377 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:23.720105   78377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0422 18:25:23.720138   78377 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:23.720191   78377 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.720221   78377 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.720264   78377 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.720285   78377 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0422 18:25:23.720310   78377 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.720396   78377 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.720464   78377 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.720244   78377 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721865   78377 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.721895   78377 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:23.721911   78377 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:23.721925   78377 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.721986   78377 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.722013   78377 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.722040   78377 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:23.722415   78377 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0422 18:25:23.947080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0422 18:25:23.956532   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:23.969401   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:23.975080   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:23.977902   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0422 18:25:23.987657   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.091349   78377 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0422 18:25:24.091415   78377 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0422 18:25:24.091473   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091508   78377 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0422 18:25:24.091564   78377 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.091612   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.091773   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.112708   78377 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0422 18:25:24.112758   78377 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.112807   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.156371   78377 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0422 18:25:24.156420   78377 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.156476   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209420   78377 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0422 18:25:24.209468   78377 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.209467   78377 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0422 18:25:24.209504   78377 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.209519   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209533   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.209580   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0422 18:25:24.209613   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0422 18:25:24.209666   78377 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0422 18:25:24.209697   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0422 18:25:24.209700   78377 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.209721   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0422 18:25:24.209750   78377 ssh_runner.go:195] Run: which crictl
	I0422 18:25:24.319159   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0422 18:25:24.319265   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0422 18:25:24.319294   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0422 18:25:24.319374   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0422 18:25:24.319453   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0422 18:25:24.319532   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0422 18:25:24.319575   78377 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0422 18:25:24.406665   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0422 18:25:24.406699   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0422 18:25:24.406776   78377 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0422 18:25:24.581672   78377 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:21.108444   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:21.109056   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:21.109082   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:21.109004   79353 retry.go:31] will retry after 958.250136ms: waiting for machine to come up
	I0422 18:25:22.069541   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:22.070120   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:22.070144   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:22.070036   79353 retry.go:31] will retry after 989.607679ms: waiting for machine to come up
	I0422 18:25:23.061351   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:23.061877   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:23.061908   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:23.061823   79353 retry.go:31] will retry after 1.451989455s: waiting for machine to come up
	I0422 18:25:24.515233   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:24.515730   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:24.515755   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:24.515686   79353 retry.go:31] will retry after 2.303903602s: waiting for machine to come up
	I0422 18:25:24.365508   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.861066   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:26.389132   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:28.883625   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:24.724445   78377 cache_images.go:92] duration metric: took 1.004285991s to LoadCachedImages
	W0422 18:25:24.894312   78377 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
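
Note: the image-loading phase above starts by asking crictl whether the preloaded images are already present (the "couldn't find preloaded image for registry.k8s.io/kube-apiserver:v1.20.0" line), then falls back to per-image cache tarballs, which are missing on this host, hence the warning. A minimal sketch of that presence check by parsing crictl's JSON output, assuming crictl is on PATH:

// preload_check_sketch.go — illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	if !ok {
		fmt.Println("assuming images are not preloaded")
	}
}
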
	I0422 18:25:24.894361   78377 kubeadm.go:928] updating node { 192.168.72.149 8443 v1.20.0 crio true true} ...
	I0422 18:25:24.894488   78377 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-367072 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:24.894582   78377 ssh_runner.go:195] Run: crio config
	I0422 18:25:24.951231   78377 cni.go:84] Creating CNI manager for ""
	I0422 18:25:24.951266   78377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:24.951282   78377 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:24.951305   78377 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.149 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-367072 NodeName:old-k8s-version-367072 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:25:24.951495   78377 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-367072"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:24.951570   78377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:25:24.964466   78377 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:24.964547   78377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:24.976092   78377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0422 18:25:24.995716   78377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:25.014159   78377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
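
Note: the kubeadm.yaml and kubelet drop-in written above are rendered from the options logged at kubeadm.go:181. A minimal sketch of rendering a trimmed ClusterConfiguration with text/template; the struct and template here are hypothetical simplifications, not minikube's bootstrapper templates:

// kubeadm_template_sketch.go — illustrative only.
package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	// Values taken from the generated config above.
	p := clusterParams{
		AdvertiseAddress:  "192.168.72.149",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	tmpl := template.Must(template.New("cfg").Parse(clusterCfg))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
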
	I0422 18:25:25.036255   78377 ssh_runner.go:195] Run: grep 192.168.72.149	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:25.040649   78377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:25.055323   78377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:25.186492   78377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:25.208819   78377 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072 for IP: 192.168.72.149
	I0422 18:25:25.208862   78377 certs.go:194] generating shared ca certs ...
	I0422 18:25:25.208882   78377 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.209089   78377 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:25.209144   78377 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:25.209155   78377 certs.go:256] generating profile certs ...
	I0422 18:25:25.209307   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/client.key
	I0422 18:25:25.209376   78377 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key.653b7478
	I0422 18:25:25.209438   78377 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key
	I0422 18:25:25.209584   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:25.209623   78377 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:25.209632   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:25.209664   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:25.209701   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:25.209738   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:25.209791   78377 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:25.210613   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:25.262071   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:25.298556   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:25.331614   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:25.368285   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:25:25.403290   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:25:25.441081   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:25.487498   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/old-k8s-version-367072/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:25:25.522482   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:25.549945   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:25.578991   78377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:25.608935   78377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:25.629179   78377 ssh_runner.go:195] Run: openssl version
	I0422 18:25:25.636149   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:25.648693   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653465   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.653534   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:25.659701   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:25.671984   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:25.684361   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689344   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.689410   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:25.695648   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:25.708266   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:25.721991   78377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726808   78377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.726872   78377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:25.732974   78377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:25.749380   78377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:25.754517   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:25.761538   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:25.768472   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:25.775728   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:25.782337   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:25.788885   78377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
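
Note: the openssl invocations above are expiry checks; `-checkend 86400` asks whether each control-plane certificate is still valid 24 hours from now. The same check can be done natively in Go; a minimal sketch using crypto/x509 against the paths from the log:

// cert_check_sketch.go — Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Same certs the log checks with -checkend 86400 (24h).
	paths := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, p := range paths {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
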
	I0422 18:25:25.795677   78377 kubeadm.go:391] StartCluster: {Name:old-k8s-version-367072 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-367072 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.149 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:25.795771   78377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:25.795839   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.837381   78377 cri.go:89] found id: ""
	I0422 18:25:25.837437   78377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:25.848554   78377 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:25.848574   78377 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:25.848579   78377 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:25.848625   78377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:25.860204   78377 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:25.861212   78377 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-367072" does not appear in /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:25:25.861884   78377 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-11572/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-367072" cluster setting kubeconfig missing "old-k8s-version-367072" context setting]
	I0422 18:25:25.862851   78377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:25.864562   78377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:25.875151   78377 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.149
	I0422 18:25:25.875182   78377 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:25.875193   78377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:25.875255   78377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:25.915872   78377 cri.go:89] found id: ""
	I0422 18:25:25.915982   78377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:25:25.934776   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:25:25.946299   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:25:25.946326   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:25:25.946378   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:25:25.957495   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:25:25.957578   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:25:25.968843   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:25:25.981829   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:25:25.981909   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:25:25.995318   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.009567   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:25:26.009630   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:25:26.024306   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:25:26.036008   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:25:26.036075   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:25:26.046594   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:25:26.057056   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:26.207676   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.085460   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.324735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.431848   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:27.541157   78377 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:25:27.541254   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.042131   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:28.542270   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.041887   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:29.542069   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
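
Note: after the phased `kubeadm init` calls above (certs, kubeconfig, kubelet-start, control-plane, etcd local), the log polls for the kube-apiserver process every 500ms with pgrep. A minimal standalone sketch of that wait loop:

// apiserver_wait_sketch.go — illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// -x matches the full command line, -n picks the newest matching process.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver process appeared: pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process")
}
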
	I0422 18:25:26.821539   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:26.822006   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:26.822033   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:26.821950   79353 retry.go:31] will retry after 1.870697225s: waiting for machine to come up
	I0422 18:25:28.695072   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:28.695420   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:28.695466   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:28.695386   79353 retry.go:31] will retry after 2.327485176s: waiting for machine to come up
	I0422 18:25:28.861976   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:31.361339   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.883801   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:33.389422   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:30.041985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:30.541653   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.041304   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.542040   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.042024   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:32.541622   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.041428   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:33.541675   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.041841   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:34.541705   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:31.024382   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:31.024817   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:31.024845   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:31.024786   79353 retry.go:31] will retry after 2.767538103s: waiting for machine to come up
	I0422 18:25:33.794390   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:33.794834   77400 main.go:141] libmachine: (no-preload-407991) DBG | unable to find current IP address of domain no-preload-407991 in network mk-no-preload-407991
	I0422 18:25:33.794872   77400 main.go:141] libmachine: (no-preload-407991) DBG | I0422 18:25:33.794808   79353 retry.go:31] will retry after 5.661373675s: waiting for machine to come up
	I0422 18:25:33.860276   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.861770   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:38.361316   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.883098   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:37.883749   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:35.041898   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:35.541499   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.041443   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:36.542150   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.042296   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:37.542002   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.041367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:38.541518   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.041471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.542025   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:39.457864   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458407   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has current primary IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.458447   77400 main.go:141] libmachine: (no-preload-407991) Found IP for machine: 192.168.39.164
	I0422 18:25:39.458492   77400 main.go:141] libmachine: (no-preload-407991) Reserving static IP address...
	I0422 18:25:39.458954   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.458980   77400 main.go:141] libmachine: (no-preload-407991) DBG | skip adding static IP to network mk-no-preload-407991 - found existing host DHCP lease matching {name: "no-preload-407991", mac: "52:54:00:a4:e4:a0", ip: "192.168.39.164"}
	I0422 18:25:39.458992   77400 main.go:141] libmachine: (no-preload-407991) Reserved static IP address: 192.168.39.164
	I0422 18:25:39.459012   77400 main.go:141] libmachine: (no-preload-407991) Waiting for SSH to be available...
	I0422 18:25:39.459027   77400 main.go:141] libmachine: (no-preload-407991) DBG | Getting to WaitForSSH function...
	I0422 18:25:39.461404   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461715   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.461746   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.461875   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH client type: external
	I0422 18:25:39.461906   77400 main.go:141] libmachine: (no-preload-407991) DBG | Using SSH private key: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa (-rw-------)
	I0422 18:25:39.461956   77400 main.go:141] libmachine: (no-preload-407991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.164 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0422 18:25:39.461974   77400 main.go:141] libmachine: (no-preload-407991) DBG | About to run SSH command:
	I0422 18:25:39.461992   77400 main.go:141] libmachine: (no-preload-407991) DBG | exit 0
	I0422 18:25:39.591446   77400 main.go:141] libmachine: (no-preload-407991) DBG | SSH cmd err, output: <nil>: 
	I0422 18:25:39.591795   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetConfigRaw
	I0422 18:25:39.592473   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.594928   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595379   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.595414   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.595632   77400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/config.json ...
	I0422 18:25:39.595890   77400 machine.go:94] provisionDockerMachine start ...
	I0422 18:25:39.595914   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:39.596103   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.598532   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.598899   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.598929   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.599071   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.599270   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599450   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.599592   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.599728   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.599927   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.599942   77400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:25:39.712043   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0422 18:25:39.712081   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712336   77400 buildroot.go:166] provisioning hostname "no-preload-407991"
	I0422 18:25:39.712363   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.712548   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.715474   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.715936   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.715960   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.716089   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.716265   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716396   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.716530   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.716656   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.716860   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.716874   77400 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-407991 && echo "no-preload-407991" | sudo tee /etc/hostname
	I0422 18:25:39.845921   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-407991
	
	I0422 18:25:39.845959   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.848790   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849093   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.849121   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.849288   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:39.849495   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849638   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:39.849817   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:39.850014   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:39.850183   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:39.850200   77400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-407991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-407991/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-407991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:25:39.977389   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:25:39.977427   77400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18706-11572/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-11572/.minikube}
	I0422 18:25:39.977447   77400 buildroot.go:174] setting up certificates
	I0422 18:25:39.977456   77400 provision.go:84] configureAuth start
	I0422 18:25:39.977468   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetMachineName
	I0422 18:25:39.977754   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:39.980800   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981266   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.981305   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.981458   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:39.984031   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984478   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:39.984510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:39.984654   77400 provision.go:143] copyHostCerts
	I0422 18:25:39.984713   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem, removing ...
	I0422 18:25:39.984725   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem
	I0422 18:25:39.984788   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/ca.pem (1078 bytes)
	I0422 18:25:39.984907   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem, removing ...
	I0422 18:25:39.984918   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem
	I0422 18:25:39.984952   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/cert.pem (1123 bytes)
	I0422 18:25:39.985038   77400 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem, removing ...
	I0422 18:25:39.985048   77400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem
	I0422 18:25:39.985076   77400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-11572/.minikube/key.pem (1675 bytes)
	I0422 18:25:39.985158   77400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem org=jenkins.no-preload-407991 san=[127.0.0.1 192.168.39.164 localhost minikube no-preload-407991]
	I0422 18:25:40.224235   77400 provision.go:177] copyRemoteCerts
	I0422 18:25:40.224306   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:25:40.224352   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.227355   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.227814   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.227842   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.228035   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.228232   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.228392   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.228560   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.318916   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:25:40.346168   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:25:40.371490   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:25:40.396866   77400 provision.go:87] duration metric: took 419.381117ms to configureAuth
	I0422 18:25:40.396899   77400 buildroot.go:189] setting minikube options for container-runtime
	I0422 18:25:40.397067   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:25:40.397130   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.399642   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400060   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.400095   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.400269   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.400466   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400652   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.400832   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.401018   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.401176   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.401191   77400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0422 18:25:40.698107   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0422 18:25:40.698140   77400 machine.go:97] duration metric: took 1.102235221s to provisionDockerMachine
	I0422 18:25:40.698153   77400 start.go:293] postStartSetup for "no-preload-407991" (driver="kvm2")
	I0422 18:25:40.698171   77400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:25:40.698187   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.698497   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:25:40.698532   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.701545   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.701933   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.701964   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.702070   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.702295   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.702492   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.702727   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.800538   77400 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:25:40.805027   77400 info.go:137] Remote host: Buildroot 2023.02.9
	I0422 18:25:40.805060   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/addons for local assets ...
	I0422 18:25:40.805133   77400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-11572/.minikube/files for local assets ...
	I0422 18:25:40.805216   77400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem -> 188842.pem in /etc/ssl/certs
	I0422 18:25:40.805304   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:25:40.816872   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:40.843857   77400 start.go:296] duration metric: took 145.69044ms for postStartSetup
	I0422 18:25:40.843896   77400 fix.go:56] duration metric: took 24.13914409s for fixHost
	I0422 18:25:40.843914   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.846770   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847148   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.847184   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.847391   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.847605   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847778   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.847966   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.848199   77400 main.go:141] libmachine: Using SSH client type: native
	I0422 18:25:40.848382   77400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.164 22 <nil> <nil>}
	I0422 18:25:40.848396   77400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0422 18:25:40.964440   77400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713810340.939149386
	
	I0422 18:25:40.964473   77400 fix.go:216] guest clock: 1713810340.939149386
	I0422 18:25:40.964483   77400 fix.go:229] Guest: 2024-04-22 18:25:40.939149386 +0000 UTC Remote: 2024-04-22 18:25:40.843899302 +0000 UTC m=+360.205454093 (delta=95.250084ms)
	I0422 18:25:40.964508   77400 fix.go:200] guest clock delta is within tolerance: 95.250084ms
	I0422 18:25:40.964513   77400 start.go:83] releasing machines lock for "no-preload-407991", held for 24.259798286s
	I0422 18:25:40.964535   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.964813   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:40.967510   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.967906   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.967932   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.968087   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968610   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968782   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:25:40.968866   77400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:25:40.968910   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.969047   77400 ssh_runner.go:195] Run: cat /version.json
	I0422 18:25:40.969074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:25:40.971818   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972039   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972190   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972203   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972394   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972565   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:40.972580   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972594   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:40.972733   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:25:40.972791   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.972875   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:25:40.972948   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:40.973062   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:25:40.973206   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:25:41.092004   77400 ssh_runner.go:195] Run: systemctl --version
	I0422 18:25:41.098574   77400 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0422 18:25:41.242800   77400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0422 18:25:41.250454   77400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0422 18:25:41.250521   77400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:25:41.267380   77400 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0422 18:25:41.267408   77400 start.go:494] detecting cgroup driver to use...
	I0422 18:25:41.267478   77400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0422 18:25:41.284742   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0422 18:25:41.299527   77400 docker.go:217] disabling cri-docker service (if available) ...
	I0422 18:25:41.299596   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0422 18:25:41.314189   77400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0422 18:25:41.329444   77400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0422 18:25:41.456719   77400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0422 18:25:41.628305   77400 docker.go:233] disabling docker service ...
	I0422 18:25:41.628376   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0422 18:25:41.643226   77400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0422 18:25:41.657578   77400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0422 18:25:41.780449   77400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0422 18:25:41.898823   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0422 18:25:41.913578   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:25:41.933621   77400 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0422 18:25:41.933679   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.944309   77400 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0422 18:25:41.944382   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.955308   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.966445   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:41.977509   77400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:25:41.989479   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.001915   77400 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.020554   77400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0422 18:25:42.033225   77400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:25:42.044177   77400 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0422 18:25:42.044231   77400 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0422 18:25:42.060403   77400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:25:42.071760   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:42.213747   77400 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0422 18:25:42.361818   77400 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0422 18:25:42.361911   77400 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0422 18:25:42.367211   77400 start.go:562] Will wait 60s for crictl version
	I0422 18:25:42.367265   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.371042   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:25:42.408686   77400 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0422 18:25:42.408773   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.438447   77400 ssh_runner.go:195] Run: crio --version
	I0422 18:25:42.469117   77400 out.go:177] * Preparing Kubernetes v1.30.0 on CRI-O 1.29.1 ...
	I0422 18:25:40.862849   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.361826   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:39.884361   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:41.885199   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:43.885865   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:40.041777   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:40.541411   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.041834   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:41.542328   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.042211   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.542008   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.041844   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:43.542121   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.041564   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:44.541344   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:42.470665   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetIP
	I0422 18:25:42.473467   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.473845   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:25:42.473871   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:25:42.474121   77400 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0422 18:25:42.478401   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:42.491034   77400 kubeadm.go:877] updating cluster {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:25:42.491163   77400 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 18:25:42.491203   77400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0422 18:25:42.530418   77400 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0". assuming images are not preloaded.
	I0422 18:25:42.530443   77400 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.530533   77400 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.530585   77400 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.530641   77400 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0422 18:25:42.530601   77400 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.530609   77400 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.530622   77400 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.530626   77400 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532108   77400 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.532136   77400 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0422 18:25:42.532111   77400 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.532113   77400 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.532175   77400 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.532197   77400 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:42.532223   77400 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.532506   77400 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.735366   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.750777   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0422 18:25:42.758260   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.759633   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.763447   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.765416   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.803799   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.832904   77400 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0" does not exist at hash "a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b" in container runtime
	I0422 18:25:42.832959   77400 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:42.833021   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981471   77400 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0422 18:25:42.981528   77400 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:42.981553   77400 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0" does not exist at hash "259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced" in container runtime
	I0422 18:25:42.981584   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981592   77400 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:42.981635   77400 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0422 18:25:42.981663   77400 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:42.981687   77400 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0" does not exist at hash "c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b" in container runtime
	I0422 18:25:42.981699   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981642   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981716   77400 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:42.981770   77400 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0" does not exist at hash "c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0" in container runtime
	I0422 18:25:42.981776   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981788   77400 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:42.981820   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:42.981846   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0
	I0422 18:25:43.021364   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0
	I0422 18:25:43.021416   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0
	I0422 18:25:43.021455   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.021460   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0
	I0422 18:25:43.021529   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0422 18:25:43.021534   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0
	I0422 18:25:43.021585   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0422 18:25:43.130300   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0422 18:25:43.130373   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0
	I0422 18:25:43.130408   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:43.130425   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0
	I0422 18:25:43.130455   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:43.130514   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:43.134769   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0422 18:25:43.134785   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0 (exists)
	I0422 18:25:43.134797   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134839   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0
	I0422 18:25:43.134853   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:43.134882   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0
	I0422 18:25:43.134959   77400 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:43.142273   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0 (exists)
	I0422 18:25:43.142486   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0422 18:25:43.142837   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0 (exists)
	I0422 18:25:43.840108   77400 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210614   77400 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.075740127s)
	I0422 18:25:45.210650   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0422 18:25:45.210655   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0: (2.075789371s)
	I0422 18:25:45.210676   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0 from cache
	I0422 18:25:45.210693   77400 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.075715404s)
	I0422 18:25:45.210699   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210706   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0 (exists)
	I0422 18:25:45.210748   77400 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.370610047s)
	I0422 18:25:45.210785   77400 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0422 18:25:45.210750   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0
	I0422 18:25:45.210842   77400 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:45.210969   77400 ssh_runner.go:195] Run: which crictl
	I0422 18:25:45.363082   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:47.861802   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:46.383938   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:48.385209   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:45.042273   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:45.541576   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.041447   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:46.541920   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.042364   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:47.541813   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.042362   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.541320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.041845   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:49.542204   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:48.203063   77400 ssh_runner.go:235] Completed: which crictl: (2.992066474s)
	I0422 18:25:48.203106   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0: (2.992228832s)
	I0422 18:25:48.203143   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0 from cache
	I0422 18:25:48.203159   77400 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:25:48.203171   77400 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:48.203210   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0422 18:25:49.863963   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:52.370507   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.883608   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:53.386229   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:50.042263   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:50.541538   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.042055   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:51.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.041479   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.542313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.041554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:53.541500   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.042153   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:54.541953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:52.419429   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (4.216195193s)
	I0422 18:25:52.419462   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0422 18:25:52.419474   77400 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.216288559s)
	I0422 18:25:52.419488   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419513   77400 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0422 18:25:52.419537   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0
	I0422 18:25:52.419581   77400 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:52.424638   77400 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0422 18:25:53.873720   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0: (1.454157304s)
	I0422 18:25:53.873750   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0 from cache
	I0422 18:25:53.873780   77400 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:53.873825   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0422 18:25:54.860810   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:56.864272   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.388103   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:57.887970   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:25:55.041393   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.541470   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.042188   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:56.541734   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.042041   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:57.541540   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.041682   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:58.542178   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.042125   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:59.542154   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:25:55.955181   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.081308071s)
	I0422 18:25:55.955210   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0422 18:25:55.955236   77400 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:55.955300   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0
	I0422 18:25:58.218734   77400 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0: (2.263410883s)
	I0422 18:25:58.218762   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0 from cache
	I0422 18:25:58.218792   77400 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:58.218843   77400 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0422 18:25:59.071398   77400 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18706-11572/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0422 18:25:59.071443   77400 cache_images.go:123] Successfully loaded all cached images
	I0422 18:25:59.071450   77400 cache_images.go:92] duration metric: took 16.54097573s to LoadCachedImages
	I0422 18:25:59.071463   77400 kubeadm.go:928] updating node { 192.168.39.164 8443 v1.30.0 crio true true} ...
	I0422 18:25:59.071610   77400 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-407991 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.164
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:25:59.071698   77400 ssh_runner.go:195] Run: crio config
	I0422 18:25:59.125757   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:25:59.125783   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:25:59.125800   77400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:25:59.125832   77400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.164 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-407991 NodeName:no-preload-407991 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.164"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.164 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:25:59.126001   77400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.164
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-407991"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.164
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.164"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:25:59.126073   77400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:25:59.137254   77400 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:25:59.137320   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:25:59.146983   77400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0422 18:25:59.165207   77400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:25:59.182898   77400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0422 18:25:59.201735   77400 ssh_runner.go:195] Run: grep 192.168.39.164	control-plane.minikube.internal$ /etc/hosts
	I0422 18:25:59.206108   77400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.164	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:25:59.219642   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:25:59.336565   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:25:59.356844   77400 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991 for IP: 192.168.39.164
	I0422 18:25:59.356873   77400 certs.go:194] generating shared ca certs ...
	I0422 18:25:59.356893   77400 certs.go:226] acquiring lock for ca certs: {Name:mk388d3dc4a0e77f8669c3ec42dbe16768d0150c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:25:59.357058   77400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key
	I0422 18:25:59.357121   77400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key
	I0422 18:25:59.357133   77400 certs.go:256] generating profile certs ...
	I0422 18:25:59.357209   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/client.key
	I0422 18:25:59.357329   77400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key.6aa1268b
	I0422 18:25:59.357413   77400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key
	I0422 18:25:59.357574   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem (1338 bytes)
	W0422 18:25:59.357616   77400 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884_empty.pem, impossibly tiny 0 bytes
	I0422 18:25:59.357631   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:25:59.357672   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:25:59.357707   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:25:59.357745   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/certs/key.pem (1675 bytes)
	I0422 18:25:59.357823   77400 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem (1708 bytes)
	I0422 18:25:59.358765   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:25:59.395982   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:25:59.430445   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:25:59.465415   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0422 18:25:59.502678   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 18:25:59.538225   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0422 18:25:59.570635   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:25:59.596096   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 18:25:59.622051   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:25:59.647372   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/certs/18884.pem --> /usr/share/ca-certificates/18884.pem (1338 bytes)
	I0422 18:25:59.673650   77400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/ssl/certs/188842.pem --> /usr/share/ca-certificates/188842.pem (1708 bytes)
	I0422 18:25:59.699515   77400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:25:59.717253   77400 ssh_runner.go:195] Run: openssl version
	I0422 18:25:59.723704   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/188842.pem && ln -fs /usr/share/ca-certificates/188842.pem /etc/ssl/certs/188842.pem"
	I0422 18:25:59.735265   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740264   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:08 /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.740319   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/188842.pem
	I0422 18:25:59.746445   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/188842.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:25:59.757879   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:25:59.769243   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774505   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:58 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.774562   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:25:59.780572   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:25:59.793472   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18884.pem && ln -fs /usr/share/ca-certificates/18884.pem /etc/ssl/certs/18884.pem"
	I0422 18:25:59.805187   77400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810148   77400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:08 /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.810191   77400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18884.pem
	I0422 18:25:59.816350   77400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18884.pem /etc/ssl/certs/51391683.0"
	I0422 18:25:59.828208   77400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:25:59.832799   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:25:59.838952   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:25:59.845145   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:25:59.851309   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:25:59.857643   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:25:59.864892   77400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0422 18:25:59.873625   77400 kubeadm.go:391] StartCluster: {Name:no-preload-407991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-407991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:25:59.873749   77400 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0422 18:25:59.873826   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.913578   77400 cri.go:89] found id: ""
	I0422 18:25:59.913656   77400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:25:59.925105   77400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:25:59.925131   77400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:25:59.925138   77400 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:25:59.925192   77400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:25:59.935942   77400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:25:59.937363   77400 kubeconfig.go:125] found "no-preload-407991" server: "https://192.168.39.164:8443"
	I0422 18:25:59.939672   77400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:25:59.949774   77400 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.164
	I0422 18:25:59.949810   77400 kubeadm.go:1154] stopping kube-system containers ...
	I0422 18:25:59.949841   77400 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0422 18:25:59.949896   77400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0422 18:25:59.989385   77400 cri.go:89] found id: ""
	I0422 18:25:59.989443   77400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0422 18:26:00.005985   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:26:00.016873   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:26:00.016897   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:26:00.016953   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:26:00.027119   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:26:00.027205   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:26:00.038360   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:26:00.048176   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:26:00.048246   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:26:00.058861   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.068955   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:26:00.069018   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:26:00.079147   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:26:00.089400   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:26:00.089477   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:26:00.100245   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:26:00.111040   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:00.224436   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:25:59.362215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:01.860196   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.388433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:02.883211   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:00.042114   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.542138   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.042285   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.542226   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.041310   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.541432   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.041406   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:03.542306   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.042010   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:04.541508   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:00.838456   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.057201   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.143346   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:01.294896   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:26:01.295031   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:01.795945   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.296085   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:02.324434   77400 api_server.go:72] duration metric: took 1.029539423s to wait for apiserver process to appear ...
	I0422 18:26:02.324467   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:26:02.324490   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.784948   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0422 18:26:04.784984   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0422 18:26:04.784997   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.844019   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.844064   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:04.844084   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:04.848805   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:04.848838   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.325458   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.332351   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.332410   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:05.824785   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:05.830293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0422 18:26:05.830318   77400 api_server.go:103] status: https://192.168.39.164:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0422 18:26:06.325380   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:26:06.332804   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:26:06.344083   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:26:06.344110   77400 api_server.go:131] duration metric: took 4.019636154s to wait for apiserver health ...
	I0422 18:26:06.344118   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:26:06.344123   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:26:06.345875   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:26:03.863020   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:06.360428   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:04.884648   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:07.382356   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:09.388391   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:05.041961   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:05.541723   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.041954   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.541963   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.041378   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:07.541879   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.041942   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:08.541357   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.041425   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:09.541474   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:06.347812   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:26:06.361087   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:26:06.385654   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:26:06.398331   77400 system_pods.go:59] 8 kube-system pods found
	I0422 18:26:06.398372   77400 system_pods.go:61] "coredns-7db6d8ff4d-2p2sr" [3f42ce46-e76d-4bc8-9dd5-463a08948e4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0422 18:26:06.398384   77400 system_pods.go:61] "etcd-no-preload-407991" [96ae7feb-802f-44a8-81fc-5ea5de12e73b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0422 18:26:06.398396   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [28010e33-49a1-4c6b-90f9-939ede3ed97e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0422 18:26:06.398404   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [1e7db029-2196-499f-bc88-d780d065f80c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0422 18:26:06.398415   77400 system_pods.go:61] "kube-proxy-767q4" [1c6d01b0-caf0-4d52-8da8-caad7b158012] Running
	I0422 18:26:06.398426   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [3ef8d145-d90e-455d-98fe-de9e6080a178] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0422 18:26:06.398433   77400 system_pods.go:61] "metrics-server-569cc877fc-jmjhm" [d831b01b-af2e-4c7f-944c-e768d724ee5a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:26:06.398439   77400 system_pods.go:61] "storage-provisioner" [db8196df-a394-4e10-9db7-c10414833af3] Running
	I0422 18:26:06.398447   77400 system_pods.go:74] duration metric: took 12.770066ms to wait for pod list to return data ...
	I0422 18:26:06.398455   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:26:06.402125   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:26:06.402158   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:26:06.402170   77400 node_conditions.go:105] duration metric: took 3.709194ms to run NodePressure ...
	I0422 18:26:06.402195   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0422 18:26:06.676133   77400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680247   77400 kubeadm.go:733] kubelet initialised
	I0422 18:26:06.680269   77400 kubeadm.go:734] duration metric: took 4.114413ms waiting for restarted kubelet to initialise ...
	I0422 18:26:06.680276   77400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:26:06.687275   77400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.693967   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.693986   77400 pod_ready.go:81] duration metric: took 6.687466ms for pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.694004   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "coredns-7db6d8ff4d-2p2sr" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.694012   77400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.698539   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698562   77400 pod_ready.go:81] duration metric: took 4.539271ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.698571   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "etcd-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.698578   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.703382   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703407   77400 pod_ready.go:81] duration metric: took 4.822601ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.703418   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-apiserver-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.703425   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:06.789413   77400 pod_ready.go:97] node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789449   77400 pod_ready.go:81] duration metric: took 86.014056ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	E0422 18:26:06.789459   77400 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-407991" hosting pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-407991" has status "Ready":"False"
	I0422 18:26:06.789465   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189544   77400 pod_ready.go:92] pod "kube-proxy-767q4" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:07.189572   77400 pod_ready.go:81] duration metric: took 400.096716ms for pod "kube-proxy-767q4" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:07.189585   77400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:09.201757   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:08.861714   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.359820   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.362303   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:11.883726   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:14.382966   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:10.041640   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:10.541360   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.042045   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.542018   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:12.541590   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.042320   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:13.542036   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.041303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:14.541575   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:11.697196   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:13.697458   77400 pod_ready.go:102] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.861378   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:17.861808   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:16.385523   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:18.883000   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:15.042300   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.542084   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.041582   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:16.541867   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.041409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:17.542019   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.042027   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:18.542266   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.042237   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:19.541613   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:15.697079   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:26:15.697104   77400 pod_ready.go:81] duration metric: took 8.507511233s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:15.697116   77400 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	I0422 18:26:17.704095   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.204276   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.360946   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:22.861202   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.883107   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:23.383119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:20.042039   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:20.541667   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.041765   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:21.542383   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.042213   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.541317   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.042164   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:23.541367   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.042303   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:24.541416   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:22.204697   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.703926   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:24.861797   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.361089   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.384161   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:27.386172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:25.042321   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:25.541554   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.041583   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:26.542179   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.041877   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:27.541400   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:27.541473   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:27.585381   78377 cri.go:89] found id: ""
	I0422 18:26:27.585411   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.585424   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:27.585431   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:27.585503   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:27.622536   78377 cri.go:89] found id: ""
	I0422 18:26:27.622568   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.622578   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:27.622584   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:27.622645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:27.665233   78377 cri.go:89] found id: ""
	I0422 18:26:27.665264   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.665272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:27.665278   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:27.665356   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:27.703600   78377 cri.go:89] found id: ""
	I0422 18:26:27.703629   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.703640   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:27.703647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:27.703706   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:27.741412   78377 cri.go:89] found id: ""
	I0422 18:26:27.741441   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.741451   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:27.741459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:27.741520   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:27.783184   78377 cri.go:89] found id: ""
	I0422 18:26:27.783211   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.783218   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:27.783224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:27.783290   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:27.825404   78377 cri.go:89] found id: ""
	I0422 18:26:27.825433   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.825443   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:27.825450   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:27.825513   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:27.862052   78377 cri.go:89] found id: ""
	I0422 18:26:27.862076   78377 logs.go:276] 0 containers: []
	W0422 18:26:27.862086   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:27.862096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:27.862109   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:27.914533   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:27.914564   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:27.929474   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:27.929502   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:28.054566   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:28.054595   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:28.054612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:28.119416   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:28.119451   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
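	(When the apiserver probe keeps failing, the run above falls back to a diagnostics pass: kubelet and CRI-O journals, dmesg, "kubectl describe nodes", and finally container status via the command just shown, which prefers crictl and only falls back to "docker ps -a" if crictl is missing or errors. The sketch below reproduces that same fallback locally with os/exec; in the log it is executed on the guest through the SSH runner.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prefer crictl (the CRI-O runtime); fall back to docker ps -a if
		// crictl is absent or fails, exactly as the command in the log does.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("container status collection failed:", err)
		}
		fmt.Print(string(out))
	}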
	I0422 18:26:27.204128   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.207057   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.364913   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.861620   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:29.883085   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:31.883536   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.883927   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:30.667642   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:30.680870   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:30.680930   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:30.719832   78377 cri.go:89] found id: ""
	I0422 18:26:30.719863   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.719874   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:30.719881   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:30.719940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:30.756168   78377 cri.go:89] found id: ""
	I0422 18:26:30.756195   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.756206   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:30.756213   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:30.756267   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:30.792940   78377 cri.go:89] found id: ""
	I0422 18:26:30.792963   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.792971   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:30.792976   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:30.793021   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:30.827452   78377 cri.go:89] found id: ""
	I0422 18:26:30.827480   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.827490   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:30.827497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:30.827563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:30.868058   78377 cri.go:89] found id: ""
	I0422 18:26:30.868088   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.868099   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:30.868107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:30.868170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:30.908639   78377 cri.go:89] found id: ""
	I0422 18:26:30.908672   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.908680   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:30.908686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:30.908735   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:30.959048   78377 cri.go:89] found id: ""
	I0422 18:26:30.959073   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.959080   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:30.959085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:30.959153   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:30.998779   78377 cri.go:89] found id: ""
	I0422 18:26:30.998809   78377 logs.go:276] 0 containers: []
	W0422 18:26:30.998821   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:30.998856   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:30.998875   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:31.053763   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:31.053804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:31.069522   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:31.069558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:31.147512   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:31.147541   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:31.147556   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:31.222713   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:31.222752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:33.765573   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:33.781038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:33.781116   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:33.822148   78377 cri.go:89] found id: ""
	I0422 18:26:33.822175   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.822182   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:33.822187   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:33.822282   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:33.862524   78377 cri.go:89] found id: ""
	I0422 18:26:33.862553   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.862559   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:33.862565   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:33.862626   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:33.905952   78377 cri.go:89] found id: ""
	I0422 18:26:33.905980   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.905991   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:33.905999   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:33.906059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:33.943184   78377 cri.go:89] found id: ""
	I0422 18:26:33.943212   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.943220   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:33.943227   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:33.943285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:33.981677   78377 cri.go:89] found id: ""
	I0422 18:26:33.981712   78377 logs.go:276] 0 containers: []
	W0422 18:26:33.981723   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:33.981731   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:33.981790   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:34.025999   78377 cri.go:89] found id: ""
	I0422 18:26:34.026026   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.026035   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:34.026042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:34.026102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:34.062940   78377 cri.go:89] found id: ""
	I0422 18:26:34.062967   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.062977   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:34.062985   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:34.063044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:34.103112   78377 cri.go:89] found id: ""
	I0422 18:26:34.103153   78377 logs.go:276] 0 containers: []
	W0422 18:26:34.103164   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:34.103175   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:34.103189   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:34.156907   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:34.156944   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:34.171581   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:34.171608   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:34.252755   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:34.252784   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:34.252799   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:34.334118   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:34.334155   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:31.704123   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:34.206443   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:33.863261   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.360525   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.361132   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.385507   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:38.882649   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:36.882905   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:36.897949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:36.898026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:36.934776   78377 cri.go:89] found id: ""
	I0422 18:26:36.934801   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.934808   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:36.934814   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:36.934870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:36.974432   78377 cri.go:89] found id: ""
	I0422 18:26:36.974459   78377 logs.go:276] 0 containers: []
	W0422 18:26:36.974467   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:36.974472   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:36.974519   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:37.011460   78377 cri.go:89] found id: ""
	I0422 18:26:37.011485   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.011496   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:37.011503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:37.011583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:37.056559   78377 cri.go:89] found id: ""
	I0422 18:26:37.056592   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.056604   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:37.056611   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:37.056670   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:37.095328   78377 cri.go:89] found id: ""
	I0422 18:26:37.095359   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.095371   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:37.095379   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:37.095460   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:37.132056   78377 cri.go:89] found id: ""
	I0422 18:26:37.132084   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.132095   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:37.132101   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:37.132162   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:37.168957   78377 cri.go:89] found id: ""
	I0422 18:26:37.168987   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.168998   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:37.169005   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:37.169072   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:37.207501   78377 cri.go:89] found id: ""
	I0422 18:26:37.207533   78377 logs.go:276] 0 containers: []
	W0422 18:26:37.207544   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:37.207553   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:37.207567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:37.289851   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:37.289890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:37.351454   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:37.351481   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:37.409901   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:37.409938   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:37.425203   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:37.425234   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:37.508518   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
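	(Every describe-nodes attempt in this stretch fails the same way: kubectl pointed at localhost:8443 gets "connection refused" because, as the empty crictl listings above confirm, no kube-apiserver container has been recreated yet. A connection-refused error is exactly what a plain TCP probe of a closed port produces; the snippet below is illustrative only.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Nothing is listening on 8443 while the apiserver container is gone,
		// so the dial fails with "connection refused", matching the log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}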
	I0422 18:26:36.704473   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:39.204839   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.863837   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.362000   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.887004   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.384351   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:40.008934   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:40.023037   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:40.023096   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:40.066750   78377 cri.go:89] found id: ""
	I0422 18:26:40.066791   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.066811   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:40.066818   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:40.066889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:40.106562   78377 cri.go:89] found id: ""
	I0422 18:26:40.106584   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.106592   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:40.106598   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:40.106644   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:40.145265   78377 cri.go:89] found id: ""
	I0422 18:26:40.145300   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.145311   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:40.145319   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:40.145385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:40.182667   78377 cri.go:89] found id: ""
	I0422 18:26:40.182696   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.182707   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:40.182714   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:40.182772   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:40.227084   78377 cri.go:89] found id: ""
	I0422 18:26:40.227114   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.227139   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:40.227148   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:40.227203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:40.264298   78377 cri.go:89] found id: ""
	I0422 18:26:40.264326   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.264333   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:40.264339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:40.264404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:40.302071   78377 cri.go:89] found id: ""
	I0422 18:26:40.302103   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.302113   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:40.302121   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:40.302191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:40.340031   78377 cri.go:89] found id: ""
	I0422 18:26:40.340072   78377 logs.go:276] 0 containers: []
	W0422 18:26:40.340083   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:40.340094   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:40.340108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:40.386371   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:40.386402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:40.438805   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:40.438884   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:40.455199   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:40.455240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:40.535984   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:40.536006   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:40.536024   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.125605   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:43.139961   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:43.140033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:43.176588   78377 cri.go:89] found id: ""
	I0422 18:26:43.176615   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.176625   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:43.176632   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:43.176695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:43.215868   78377 cri.go:89] found id: ""
	I0422 18:26:43.215900   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.215921   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:43.215929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:43.215991   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:43.253562   78377 cri.go:89] found id: ""
	I0422 18:26:43.253592   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.253603   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:43.253608   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:43.253652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:43.289305   78377 cri.go:89] found id: ""
	I0422 18:26:43.289335   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.289346   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:43.289353   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:43.289417   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:43.329241   78377 cri.go:89] found id: ""
	I0422 18:26:43.329286   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.329295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:43.329300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:43.329351   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:43.369682   78377 cri.go:89] found id: ""
	I0422 18:26:43.369700   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.369707   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:43.369713   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:43.369764   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:43.411788   78377 cri.go:89] found id: ""
	I0422 18:26:43.411812   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.411821   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:43.411829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:43.411911   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:43.447351   78377 cri.go:89] found id: ""
	I0422 18:26:43.447387   78377 logs.go:276] 0 containers: []
	W0422 18:26:43.447398   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:43.447407   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:43.447418   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:43.520087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:43.520114   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:43.520125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:43.602199   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:43.602233   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:43.645723   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:43.645748   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:43.702769   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:43.702804   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:41.704418   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:43.704878   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.362073   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.860279   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:45.385285   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:47.882420   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:46.229598   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:46.243348   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:46.243418   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:46.282470   78377 cri.go:89] found id: ""
	I0422 18:26:46.282500   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.282512   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:46.282519   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:46.282584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:46.327718   78377 cri.go:89] found id: ""
	I0422 18:26:46.327747   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.327755   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:46.327761   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:46.327829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:46.369785   78377 cri.go:89] found id: ""
	I0422 18:26:46.369807   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.369814   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:46.369820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:46.369867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:46.408132   78377 cri.go:89] found id: ""
	I0422 18:26:46.408161   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.408170   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:46.408175   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:46.408236   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:46.450058   78377 cri.go:89] found id: ""
	I0422 18:26:46.450084   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.450091   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:46.450096   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:46.450144   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:46.493747   78377 cri.go:89] found id: ""
	I0422 18:26:46.493776   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.493788   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:46.493794   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:46.493847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:46.529054   78377 cri.go:89] found id: ""
	I0422 18:26:46.529090   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.529102   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:46.529122   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:46.529186   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:46.566699   78377 cri.go:89] found id: ""
	I0422 18:26:46.566724   78377 logs.go:276] 0 containers: []
	W0422 18:26:46.566732   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:46.566740   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:46.566752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:46.582569   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:46.582606   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:46.652188   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:46.652212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:46.652224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:46.732276   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:46.732316   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:46.789834   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:46.789862   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.343229   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:49.357513   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:49.357571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:49.396741   78377 cri.go:89] found id: ""
	I0422 18:26:49.396774   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.396785   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:49.396792   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:49.396862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:49.432048   78377 cri.go:89] found id: ""
	I0422 18:26:49.432081   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.432093   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:49.432100   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:49.432159   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:49.482104   78377 cri.go:89] found id: ""
	I0422 18:26:49.482130   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.482138   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:49.482145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:49.482202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:49.526782   78377 cri.go:89] found id: ""
	I0422 18:26:49.526811   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.526823   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:49.526830   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:49.526884   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:49.575436   78377 cri.go:89] found id: ""
	I0422 18:26:49.575471   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.575482   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:49.575490   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:49.575553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:49.628839   78377 cri.go:89] found id: ""
	I0422 18:26:49.628862   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.628870   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:49.628875   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:49.628940   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:45.706474   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:48.205681   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.860748   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.360586   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.884553   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:51.885527   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.387502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:49.670046   78377 cri.go:89] found id: ""
	I0422 18:26:49.670074   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.670085   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:49.670091   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:49.670158   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:49.707083   78377 cri.go:89] found id: ""
	I0422 18:26:49.707109   78377 logs.go:276] 0 containers: []
	W0422 18:26:49.707119   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:49.707144   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:49.707157   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:49.762794   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:49.762838   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:49.777771   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:49.777801   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:49.853426   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:49.853448   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:49.853463   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:49.934621   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:49.934659   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:52.481352   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:52.495956   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:52.496025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:52.539518   78377 cri.go:89] found id: ""
	I0422 18:26:52.539549   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.539559   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:52.539566   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:52.539627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:52.580604   78377 cri.go:89] found id: ""
	I0422 18:26:52.580632   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.580641   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:52.580646   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:52.580700   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:52.622746   78377 cri.go:89] found id: ""
	I0422 18:26:52.622775   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.622783   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:52.622795   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:52.622858   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:52.659557   78377 cri.go:89] found id: ""
	I0422 18:26:52.659579   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.659587   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:52.659592   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:52.659661   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:52.697653   78377 cri.go:89] found id: ""
	I0422 18:26:52.697678   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.697685   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:52.697691   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:52.697745   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:52.735505   78377 cri.go:89] found id: ""
	I0422 18:26:52.735536   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.735546   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:52.735554   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:52.735616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:52.774216   78377 cri.go:89] found id: ""
	I0422 18:26:52.774239   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.774247   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:52.774261   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:52.774318   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:52.812909   78377 cri.go:89] found id: ""
	I0422 18:26:52.812934   78377 logs.go:276] 0 containers: []
	W0422 18:26:52.812941   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:52.812949   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:52.812981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:52.897636   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:52.897663   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:52.897679   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:52.985013   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:52.985046   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:53.031395   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:53.031427   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:53.088446   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:53.088480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:50.703624   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:52.704794   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.204187   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:54.861314   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:57.360430   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:56.882974   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:58.884770   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:55.603647   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:55.617977   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:55.618039   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:55.663769   78377 cri.go:89] found id: ""
	I0422 18:26:55.663797   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.663815   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:55.663822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:55.663925   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:55.701287   78377 cri.go:89] found id: ""
	I0422 18:26:55.701326   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.701338   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:55.701346   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:55.701435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:55.740041   78377 cri.go:89] found id: ""
	I0422 18:26:55.740067   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.740078   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:55.740107   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:55.740163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:55.779093   78377 cri.go:89] found id: ""
	I0422 18:26:55.779143   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.779154   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:55.779170   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:55.779219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:55.822107   78377 cri.go:89] found id: ""
	I0422 18:26:55.822133   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.822141   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:55.822146   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:55.822195   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:55.862157   78377 cri.go:89] found id: ""
	I0422 18:26:55.862204   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.862215   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:55.862224   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:55.862295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:55.902557   78377 cri.go:89] found id: ""
	I0422 18:26:55.902582   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.902595   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:55.902601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:55.902663   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:55.942185   78377 cri.go:89] found id: ""
	I0422 18:26:55.942215   78377 logs.go:276] 0 containers: []
	W0422 18:26:55.942226   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:55.942237   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:55.942252   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:55.957050   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:55.957083   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:56.035015   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:56.035043   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:56.035058   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:56.125595   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:56.125636   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:56.169096   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:56.169131   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:58.725079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:26:58.739736   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:26:58.739808   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:26:58.777724   78377 cri.go:89] found id: ""
	I0422 18:26:58.777752   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.777762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:26:58.777769   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:26:58.777828   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:26:58.814668   78377 cri.go:89] found id: ""
	I0422 18:26:58.814702   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.814713   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:26:58.814721   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:26:58.814791   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:26:58.852609   78377 cri.go:89] found id: ""
	I0422 18:26:58.852634   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.852648   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:26:58.852655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:26:58.852720   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:26:58.891881   78377 cri.go:89] found id: ""
	I0422 18:26:58.891904   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.891910   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:26:58.891936   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:26:58.891994   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:26:58.931663   78377 cri.go:89] found id: ""
	I0422 18:26:58.931690   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.931701   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:26:58.931708   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:26:58.931782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:26:58.967795   78377 cri.go:89] found id: ""
	I0422 18:26:58.967816   78377 logs.go:276] 0 containers: []
	W0422 18:26:58.967823   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:26:58.967829   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:26:58.967879   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:26:59.008898   78377 cri.go:89] found id: ""
	I0422 18:26:59.008932   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.008943   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:26:59.008950   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:26:59.009007   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:26:59.049230   78377 cri.go:89] found id: ""
	I0422 18:26:59.049267   78377 logs.go:276] 0 containers: []
	W0422 18:26:59.049278   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:26:59.049288   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:26:59.049304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:26:59.104461   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:26:59.104508   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:26:59.119555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:26:59.119584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:26:59.195905   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:26:59.195952   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:26:59.195969   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:26:59.276319   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:26:59.276360   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:26:57.703613   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:00.205449   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:26:59.861376   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.862613   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.386313   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:03.883728   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:01.818221   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:01.833234   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:01.833294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:01.870997   78377 cri.go:89] found id: ""
	I0422 18:27:01.871022   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.871030   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:01.871036   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:01.871102   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:01.910414   78377 cri.go:89] found id: ""
	I0422 18:27:01.910443   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.910453   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:01.910461   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:01.910526   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:01.949499   78377 cri.go:89] found id: ""
	I0422 18:27:01.949524   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.949532   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:01.949537   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:01.949598   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:01.987702   78377 cri.go:89] found id: ""
	I0422 18:27:01.987736   78377 logs.go:276] 0 containers: []
	W0422 18:27:01.987747   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:01.987763   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:01.987836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:02.027193   78377 cri.go:89] found id: ""
	I0422 18:27:02.027222   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.027233   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:02.027240   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:02.027332   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:02.067537   78377 cri.go:89] found id: ""
	I0422 18:27:02.067564   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.067578   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:02.067584   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:02.067631   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:02.111085   78377 cri.go:89] found id: ""
	I0422 18:27:02.111112   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.111119   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:02.111140   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:02.111194   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:02.150730   78377 cri.go:89] found id: ""
	I0422 18:27:02.150760   78377 logs.go:276] 0 containers: []
	W0422 18:27:02.150769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:02.150777   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:02.150789   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:02.230124   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:02.230150   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:02.230164   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:02.315337   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:02.315384   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:02.362022   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:02.362048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:02.421884   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:02.421924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:02.205610   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.704158   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.359865   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:06.359968   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.360935   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:05.884072   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:08.386493   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:04.937145   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:04.952303   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:04.952412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:04.995024   78377 cri.go:89] found id: ""
	I0422 18:27:04.995059   78377 logs.go:276] 0 containers: []
	W0422 18:27:04.995071   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:04.995079   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:04.995151   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:05.035094   78377 cri.go:89] found id: ""
	I0422 18:27:05.035129   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.035141   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:05.035148   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:05.035204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:05.074178   78377 cri.go:89] found id: ""
	I0422 18:27:05.074204   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.074215   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:05.074222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:05.074294   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:05.115285   78377 cri.go:89] found id: ""
	I0422 18:27:05.115313   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.115324   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:05.115331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:05.115398   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:05.151000   78377 cri.go:89] found id: ""
	I0422 18:27:05.151032   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.151041   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:05.151047   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:05.151189   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:05.191627   78377 cri.go:89] found id: ""
	I0422 18:27:05.191651   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.191659   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:05.191664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:05.191710   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:05.232141   78377 cri.go:89] found id: ""
	I0422 18:27:05.232173   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.232183   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:05.232191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:05.232252   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:05.268498   78377 cri.go:89] found id: ""
	I0422 18:27:05.268523   78377 logs.go:276] 0 containers: []
	W0422 18:27:05.268530   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:05.268537   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:05.268554   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:05.315909   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:05.315937   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:05.369623   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:05.369664   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:05.387343   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:05.387381   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:05.466087   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:05.466106   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:05.466117   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:08.053578   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:08.067569   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:08.067627   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:08.108274   78377 cri.go:89] found id: ""
	I0422 18:27:08.108307   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.108318   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:08.108325   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:08.108384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:08.155343   78377 cri.go:89] found id: ""
	I0422 18:27:08.155366   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.155373   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:08.155379   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:08.155435   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:08.194636   78377 cri.go:89] found id: ""
	I0422 18:27:08.194661   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.194672   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:08.194677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:08.194724   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:08.232992   78377 cri.go:89] found id: ""
	I0422 18:27:08.233017   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.233024   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:08.233029   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:08.233076   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:08.271349   78377 cri.go:89] found id: ""
	I0422 18:27:08.271381   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.271391   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:08.271407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:08.271459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:08.311991   78377 cri.go:89] found id: ""
	I0422 18:27:08.312021   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.312033   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:08.312042   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:08.312097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:08.353301   78377 cri.go:89] found id: ""
	I0422 18:27:08.353326   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.353333   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:08.353340   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:08.353399   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:08.391989   78377 cri.go:89] found id: ""
	I0422 18:27:08.392015   78377 logs.go:276] 0 containers: []
	W0422 18:27:08.392025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:08.392035   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:08.392048   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:08.437228   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:08.437260   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:08.489086   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:08.489121   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:08.503588   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:08.503616   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:08.583824   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:08.583845   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:08.583858   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:07.203802   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:09.204754   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.862854   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.361215   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:10.883779   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:12.883989   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:11.164702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:11.178228   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:11.178293   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:11.217691   78377 cri.go:89] found id: ""
	I0422 18:27:11.217719   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.217729   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:11.217735   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:11.217796   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:11.253648   78377 cri.go:89] found id: ""
	I0422 18:27:11.253676   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.253685   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:11.253692   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:11.253753   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:11.290934   78377 cri.go:89] found id: ""
	I0422 18:27:11.290968   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.290979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:11.290988   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:11.291051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:11.331215   78377 cri.go:89] found id: ""
	I0422 18:27:11.331240   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.331249   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:11.331254   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:11.331344   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:11.371595   78377 cri.go:89] found id: ""
	I0422 18:27:11.371621   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.371629   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:11.371634   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:11.371697   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:11.413577   78377 cri.go:89] found id: ""
	I0422 18:27:11.413607   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.413616   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:11.413624   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:11.413684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:11.450669   78377 cri.go:89] found id: ""
	I0422 18:27:11.450700   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.450709   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:11.450717   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:11.450779   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:11.488096   78377 cri.go:89] found id: ""
	I0422 18:27:11.488122   78377 logs.go:276] 0 containers: []
	W0422 18:27:11.488131   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:11.488142   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:11.488156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.540258   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:11.540299   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:11.555878   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:11.555922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:11.638190   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:11.638212   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:11.638224   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:11.719691   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:11.719726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:14.268811   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:14.283695   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:14.283749   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:14.323252   78377 cri.go:89] found id: ""
	I0422 18:27:14.323286   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.323299   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:14.323306   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:14.323370   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:14.362354   78377 cri.go:89] found id: ""
	I0422 18:27:14.362375   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.362382   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:14.362387   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:14.362450   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:14.405439   78377 cri.go:89] found id: ""
	I0422 18:27:14.405460   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.405467   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:14.405473   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:14.405531   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:14.445358   78377 cri.go:89] found id: ""
	I0422 18:27:14.445389   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.445399   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:14.445407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:14.445476   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:14.481933   78377 cri.go:89] found id: ""
	I0422 18:27:14.481961   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.481969   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:14.481974   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:14.482033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:14.526992   78377 cri.go:89] found id: ""
	I0422 18:27:14.527019   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.527028   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:14.527040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:14.527089   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:14.562197   78377 cri.go:89] found id: ""
	I0422 18:27:14.562221   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.562229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:14.562238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:14.562287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:14.599098   78377 cri.go:89] found id: ""
	I0422 18:27:14.599141   78377 logs.go:276] 0 containers: []
	W0422 18:27:14.599153   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:14.599164   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:14.599177   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:11.205525   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:13.706785   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:15.861009   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.861214   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.884371   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:17.384911   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:14.655768   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:14.655800   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:14.670894   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:14.670929   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:14.759845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:14.759863   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:14.759874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:14.839715   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:14.839752   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:17.384859   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:17.399664   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:17.399741   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:17.439786   78377 cri.go:89] found id: ""
	I0422 18:27:17.439809   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.439817   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:17.439822   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:17.439878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:17.476532   78377 cri.go:89] found id: ""
	I0422 18:27:17.476553   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.476561   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:17.476566   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:17.476623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:17.513464   78377 cri.go:89] found id: ""
	I0422 18:27:17.513488   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.513495   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:17.513500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:17.513546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:17.548793   78377 cri.go:89] found id: ""
	I0422 18:27:17.548821   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.548831   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:17.548838   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:17.548888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:17.584600   78377 cri.go:89] found id: ""
	I0422 18:27:17.584626   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.584636   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:17.584644   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:17.584705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:17.621574   78377 cri.go:89] found id: ""
	I0422 18:27:17.621603   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.621615   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:17.621622   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:17.621686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:17.663252   78377 cri.go:89] found id: ""
	I0422 18:27:17.663283   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.663290   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:17.663295   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:17.663352   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:17.702987   78377 cri.go:89] found id: ""
	I0422 18:27:17.703014   78377 logs.go:276] 0 containers: []
	W0422 18:27:17.703025   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:17.703035   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:17.703049   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:17.758182   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:17.758222   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:17.775796   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:17.775828   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:17.866450   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:17.866493   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:17.866507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:17.947651   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:17.947685   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:16.204000   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:18.704622   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.864836   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:22.360984   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:19.883393   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:21.885743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.384476   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:20.489441   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:20.502920   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:20.502987   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:20.540533   78377 cri.go:89] found id: ""
	I0422 18:27:20.540557   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.540565   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:20.540569   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:20.540612   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:20.578789   78377 cri.go:89] found id: ""
	I0422 18:27:20.578815   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.578824   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:20.578832   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:20.578900   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:20.613481   78377 cri.go:89] found id: ""
	I0422 18:27:20.613515   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.613525   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:20.613533   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:20.613597   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:20.650289   78377 cri.go:89] found id: ""
	I0422 18:27:20.650320   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.650331   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:20.650339   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:20.650400   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:20.686259   78377 cri.go:89] found id: ""
	I0422 18:27:20.686288   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.686300   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:20.686306   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:20.686367   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:20.725983   78377 cri.go:89] found id: ""
	I0422 18:27:20.726011   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.726018   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:20.726024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:20.726092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:20.762193   78377 cri.go:89] found id: ""
	I0422 18:27:20.762220   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.762229   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:20.762237   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:20.762295   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:20.800738   78377 cri.go:89] found id: ""
	I0422 18:27:20.800761   78377 logs.go:276] 0 containers: []
	W0422 18:27:20.800769   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:20.800776   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:20.800787   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.842744   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:20.842771   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:20.896307   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:20.896337   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:20.911457   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:20.911485   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:20.985249   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:20.985277   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:20.985293   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:23.560513   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:23.585134   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:23.585214   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:23.624947   78377 cri.go:89] found id: ""
	I0422 18:27:23.624972   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.624980   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:23.624986   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:23.625051   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:23.661886   78377 cri.go:89] found id: ""
	I0422 18:27:23.661915   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.661924   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:23.661929   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:23.661997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:23.701061   78377 cri.go:89] found id: ""
	I0422 18:27:23.701087   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.701097   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:23.701104   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:23.701163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:23.742728   78377 cri.go:89] found id: ""
	I0422 18:27:23.742753   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.742760   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:23.742765   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:23.742813   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:23.786970   78377 cri.go:89] found id: ""
	I0422 18:27:23.787002   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.787011   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:23.787017   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:23.787070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:23.825253   78377 cri.go:89] found id: ""
	I0422 18:27:23.825282   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.825292   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:23.825300   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:23.825357   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:23.865774   78377 cri.go:89] found id: ""
	I0422 18:27:23.865799   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.865807   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:23.865812   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:23.865860   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:23.903212   78377 cri.go:89] found id: ""
	I0422 18:27:23.903239   78377 logs.go:276] 0 containers: []
	W0422 18:27:23.903247   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:23.903254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:23.903267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:23.958931   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:23.958968   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:23.973352   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:23.973383   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:24.053335   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:24.053356   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:24.053367   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:24.136491   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:24.136528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:20.704821   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:23.203548   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:25.204601   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:24.361665   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.361708   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.388979   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.882505   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:26.679983   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:26.694521   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:26.694583   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:26.733114   78377 cri.go:89] found id: ""
	I0422 18:27:26.733146   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.733156   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:26.733163   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:26.733221   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:26.776882   78377 cri.go:89] found id: ""
	I0422 18:27:26.776906   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.776913   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:26.776918   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:26.776966   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:26.822830   78377 cri.go:89] found id: ""
	I0422 18:27:26.822863   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.822874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:26.822882   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:26.822945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:26.868600   78377 cri.go:89] found id: ""
	I0422 18:27:26.868633   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.868641   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:26.868655   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:26.868712   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:26.907547   78377 cri.go:89] found id: ""
	I0422 18:27:26.907570   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.907578   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:26.907583   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:26.907640   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:26.947594   78377 cri.go:89] found id: ""
	I0422 18:27:26.947635   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.947647   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:26.947656   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:26.947715   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:26.986732   78377 cri.go:89] found id: ""
	I0422 18:27:26.986761   78377 logs.go:276] 0 containers: []
	W0422 18:27:26.986772   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:26.986780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:26.986838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:27.024338   78377 cri.go:89] found id: ""
	I0422 18:27:27.024370   78377 logs.go:276] 0 containers: []
	W0422 18:27:27.024378   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:27.024385   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:27.024396   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:27.077071   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:27.077112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:27.092664   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:27.092694   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:27.173056   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:27.173081   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:27.173099   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:27.257836   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:27.257877   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:27.714190   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.204420   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:28.861728   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:31.360750   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.360969   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:30.883051   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:33.386563   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:29.800456   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:29.816085   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:29.816150   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:29.858826   78377 cri.go:89] found id: ""
	I0422 18:27:29.858857   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.858878   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:29.858886   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:29.858956   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:29.900369   78377 cri.go:89] found id: ""
	I0422 18:27:29.900403   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.900417   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:29.900424   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:29.900490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:29.939766   78377 cri.go:89] found id: ""
	I0422 18:27:29.939801   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.939811   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:29.939818   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:29.939889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:29.986579   78377 cri.go:89] found id: ""
	I0422 18:27:29.986607   78377 logs.go:276] 0 containers: []
	W0422 18:27:29.986617   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:29.986625   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:29.986685   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:30.030059   78377 cri.go:89] found id: ""
	I0422 18:27:30.030090   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.030102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:30.030110   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:30.030192   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:30.077543   78377 cri.go:89] found id: ""
	I0422 18:27:30.077573   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.077581   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:30.077586   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:30.077645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:30.123087   78377 cri.go:89] found id: ""
	I0422 18:27:30.123116   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.123137   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:30.123145   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:30.123203   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:30.160589   78377 cri.go:89] found id: ""
	I0422 18:27:30.160613   78377 logs.go:276] 0 containers: []
	W0422 18:27:30.160621   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:30.160628   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:30.160639   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:30.213321   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:30.213352   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:30.228102   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:30.228129   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:30.303977   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:30.304013   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:30.304029   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:30.383817   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:30.383851   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:32.930619   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:32.943854   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:32.943914   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:32.984112   78377 cri.go:89] found id: ""
	I0422 18:27:32.984138   78377 logs.go:276] 0 containers: []
	W0422 18:27:32.984146   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:32.984151   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:32.984200   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:33.022243   78377 cri.go:89] found id: ""
	I0422 18:27:33.022283   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.022294   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:33.022301   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:33.022366   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:33.061177   78377 cri.go:89] found id: ""
	I0422 18:27:33.061205   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.061214   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:33.061222   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:33.061281   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:33.104430   78377 cri.go:89] found id: ""
	I0422 18:27:33.104458   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.104466   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:33.104471   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:33.104528   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:33.140255   78377 cri.go:89] found id: ""
	I0422 18:27:33.140284   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.140295   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:33.140302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:33.140362   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:33.179487   78377 cri.go:89] found id: ""
	I0422 18:27:33.179512   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.179519   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:33.179524   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:33.179576   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:33.217226   78377 cri.go:89] found id: ""
	I0422 18:27:33.217258   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.217265   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:33.217271   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:33.217319   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:33.257076   78377 cri.go:89] found id: ""
	I0422 18:27:33.257104   78377 logs.go:276] 0 containers: []
	W0422 18:27:33.257114   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:33.257123   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:33.257137   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:33.271183   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:33.271211   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:33.344812   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:33.344843   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:33.344859   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:33.420605   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:33.420640   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:33.465779   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:33.465807   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:32.704424   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:34.705215   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.861184   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.361048   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:35.883602   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:38.383601   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:36.019062   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:36.039226   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:36.039305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:36.082940   78377 cri.go:89] found id: ""
	I0422 18:27:36.082978   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.082991   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:36.083000   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:36.083063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:36.120371   78377 cri.go:89] found id: ""
	I0422 18:27:36.120416   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.120428   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:36.120436   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:36.120496   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:36.158018   78377 cri.go:89] found id: ""
	I0422 18:27:36.158051   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.158063   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:36.158070   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:36.158131   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:36.196192   78377 cri.go:89] found id: ""
	I0422 18:27:36.196221   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.196231   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:36.196238   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:36.196305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:36.237742   78377 cri.go:89] found id: ""
	I0422 18:27:36.237773   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.237784   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:36.237791   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:36.237852   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:36.277884   78377 cri.go:89] found id: ""
	I0422 18:27:36.277911   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.277918   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:36.277923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:36.277993   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:36.314897   78377 cri.go:89] found id: ""
	I0422 18:27:36.314929   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.314939   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:36.314947   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:36.315009   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:36.354806   78377 cri.go:89] found id: ""
	I0422 18:27:36.354833   78377 logs.go:276] 0 containers: []
	W0422 18:27:36.354843   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:36.354851   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:36.354863   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:36.406941   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:36.406981   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:36.423308   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:36.423344   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:36.507202   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:36.507223   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:36.507238   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:36.582489   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:36.582525   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:39.127409   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:39.140820   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:39.140895   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:39.182068   78377 cri.go:89] found id: ""
	I0422 18:27:39.182094   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.182105   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:39.182112   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:39.182169   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:39.222711   78377 cri.go:89] found id: ""
	I0422 18:27:39.222735   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.222751   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:39.222756   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:39.222827   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:39.263396   78377 cri.go:89] found id: ""
	I0422 18:27:39.263423   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.263432   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:39.263437   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:39.263490   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:39.300559   78377 cri.go:89] found id: ""
	I0422 18:27:39.300589   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.300603   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:39.300610   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:39.300672   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:39.336486   78377 cri.go:89] found id: ""
	I0422 18:27:39.336521   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.336530   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:39.336536   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:39.336584   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:39.373985   78377 cri.go:89] found id: ""
	I0422 18:27:39.374020   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.374030   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:39.374038   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:39.374097   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:39.412511   78377 cri.go:89] found id: ""
	I0422 18:27:39.412540   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.412547   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:39.412553   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:39.412616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:39.459197   78377 cri.go:89] found id: ""
	I0422 18:27:39.459233   78377 logs.go:276] 0 containers: []
	W0422 18:27:39.459243   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:39.459254   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:39.459269   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:39.514579   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:39.514623   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:39.530082   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:39.530107   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:39.603797   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:39.603830   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:39.603854   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:37.203082   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.204563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.860739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.861544   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:40.385271   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:42.389273   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:39.684853   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:39.684890   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:42.227702   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:42.243438   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:42.243499   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:42.290374   78377 cri.go:89] found id: ""
	I0422 18:27:42.290402   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.290413   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:42.290420   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:42.290481   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:42.332793   78377 cri.go:89] found id: ""
	I0422 18:27:42.332828   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.332840   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:42.332875   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:42.332937   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:42.375844   78377 cri.go:89] found id: ""
	I0422 18:27:42.375876   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.375884   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:42.375889   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:42.375945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:42.419725   78377 cri.go:89] found id: ""
	I0422 18:27:42.419758   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.419769   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:42.419777   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:42.419878   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:42.453969   78377 cri.go:89] found id: ""
	I0422 18:27:42.454004   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.454014   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:42.454022   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:42.454080   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:42.489045   78377 cri.go:89] found id: ""
	I0422 18:27:42.489077   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.489087   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:42.489095   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:42.489157   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:42.529127   78377 cri.go:89] found id: ""
	I0422 18:27:42.529155   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.529166   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:42.529174   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:42.529229   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:42.566253   78377 cri.go:89] found id: ""
	I0422 18:27:42.566278   78377 logs.go:276] 0 containers: []
	W0422 18:27:42.566286   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:42.566293   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:42.566307   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:42.622054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:42.622101   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:42.636278   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:42.636304   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:42.712179   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:42.712203   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:42.712215   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:42.791885   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:42.791928   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:41.705615   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.203947   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.361656   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:47.860929   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:44.882684   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:46.886119   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:49.382017   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:45.337091   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:45.353053   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:45.353133   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:45.393230   78377 cri.go:89] found id: ""
	I0422 18:27:45.393257   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.393267   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:45.393274   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:45.393330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:45.432183   78377 cri.go:89] found id: ""
	I0422 18:27:45.432210   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.432220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:45.432228   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:45.432285   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:45.468114   78377 cri.go:89] found id: ""
	I0422 18:27:45.468147   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.468157   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:45.468169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:45.468233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:45.504793   78377 cri.go:89] found id: ""
	I0422 18:27:45.504817   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.504836   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:45.504841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:45.504889   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:45.544822   78377 cri.go:89] found id: ""
	I0422 18:27:45.544851   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.544862   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:45.544868   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:45.544934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:45.588266   78377 cri.go:89] found id: ""
	I0422 18:27:45.588289   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.588322   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:45.588330   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:45.588391   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:45.625549   78377 cri.go:89] found id: ""
	I0422 18:27:45.625576   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.625583   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:45.625589   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:45.625639   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:45.663066   78377 cri.go:89] found id: ""
	I0422 18:27:45.663096   78377 logs.go:276] 0 containers: []
	W0422 18:27:45.663104   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:45.663114   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:45.663143   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:45.715051   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:45.715082   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:45.729496   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:45.729523   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:45.801270   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:45.801296   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:45.801312   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:45.886530   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:45.886561   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:48.429822   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:48.444528   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:48.444610   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:48.483164   78377 cri.go:89] found id: ""
	I0422 18:27:48.483194   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.483204   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:48.483210   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:48.483257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:48.520295   78377 cri.go:89] found id: ""
	I0422 18:27:48.520321   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.520328   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:48.520333   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:48.520378   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:48.558839   78377 cri.go:89] found id: ""
	I0422 18:27:48.558866   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.558875   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:48.558881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:48.558939   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:48.599692   78377 cri.go:89] found id: ""
	I0422 18:27:48.599715   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.599722   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:48.599728   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:48.599773   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:48.638457   78377 cri.go:89] found id: ""
	I0422 18:27:48.638486   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.638494   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:48.638500   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:48.638561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:48.677344   78377 cri.go:89] found id: ""
	I0422 18:27:48.677383   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.677395   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:48.677402   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:48.677466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:48.717129   78377 cri.go:89] found id: ""
	I0422 18:27:48.717155   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.717163   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:48.717169   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:48.717219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:48.758256   78377 cri.go:89] found id: ""
	I0422 18:27:48.758281   78377 logs.go:276] 0 containers: []
	W0422 18:27:48.758289   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:48.758297   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:48.758311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:48.810377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:48.810415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:48.824919   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:48.824949   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:48.908446   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:48.908473   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:48.908569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:48.984952   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:48.984991   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:46.703083   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:48.705413   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:50.361465   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:52.364509   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.384561   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.882657   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:51.527387   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:51.541482   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:51.541560   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.579020   78377 cri.go:89] found id: ""
	I0422 18:27:51.579098   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.579114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:51.579134   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:51.579204   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:51.616430   78377 cri.go:89] found id: ""
	I0422 18:27:51.616456   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.616465   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:51.616470   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:51.616516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:51.654089   78377 cri.go:89] found id: ""
	I0422 18:27:51.654120   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.654131   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:51.654138   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:51.654201   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:51.693945   78377 cri.go:89] found id: ""
	I0422 18:27:51.693979   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.693993   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:51.694000   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:51.694068   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:51.732873   78377 cri.go:89] found id: ""
	I0422 18:27:51.732906   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.732917   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:51.732923   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:51.732990   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:51.770772   78377 cri.go:89] found id: ""
	I0422 18:27:51.770794   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.770801   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:51.770807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:51.770862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:51.819370   78377 cri.go:89] found id: ""
	I0422 18:27:51.819397   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.819405   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:51.819411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:51.819459   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:51.858001   78377 cri.go:89] found id: ""
	I0422 18:27:51.858033   78377 logs.go:276] 0 containers: []
	W0422 18:27:51.858044   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:51.858055   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:51.858069   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:51.938531   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:51.938557   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:51.938571   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:52.014397   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:52.014435   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:52.059420   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:52.059458   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:52.119498   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:52.119534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:54.634238   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:54.649044   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:54.649119   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:51.203623   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:53.205834   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.863919   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.360796   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:56.383743   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:58.383783   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:54.691846   78377 cri.go:89] found id: ""
	I0422 18:27:54.691879   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.691890   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:54.691907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:54.691970   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:54.731466   78377 cri.go:89] found id: ""
	I0422 18:27:54.731496   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.731507   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:54.731515   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:54.731588   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:54.776948   78377 cri.go:89] found id: ""
	I0422 18:27:54.776972   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.776979   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:54.776984   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:54.777031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:54.815908   78377 cri.go:89] found id: ""
	I0422 18:27:54.815939   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.815946   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:54.815952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:54.815997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:54.856641   78377 cri.go:89] found id: ""
	I0422 18:27:54.856673   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.856684   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:54.856690   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:54.856757   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:54.896968   78377 cri.go:89] found id: ""
	I0422 18:27:54.896996   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.897004   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:54.897009   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:54.897073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:54.936353   78377 cri.go:89] found id: ""
	I0422 18:27:54.936388   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.936400   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:54.936407   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:54.936468   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:54.976009   78377 cri.go:89] found id: ""
	I0422 18:27:54.976038   78377 logs.go:276] 0 containers: []
	W0422 18:27:54.976048   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:54.976058   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:54.976071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:55.027890   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:55.027924   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:55.041914   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:55.041939   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:55.112556   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.112583   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:55.112597   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:55.187688   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:55.187723   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:57.730259   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:27:57.745006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:27:57.745073   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:27:57.786906   78377 cri.go:89] found id: ""
	I0422 18:27:57.786942   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.786952   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:27:57.786959   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:27:57.787019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:27:57.827158   78377 cri.go:89] found id: ""
	I0422 18:27:57.827188   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.827199   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:27:57.827206   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:27:57.827254   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:27:57.864370   78377 cri.go:89] found id: ""
	I0422 18:27:57.864405   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.864413   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:27:57.864419   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:27:57.864475   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:27:57.903747   78377 cri.go:89] found id: ""
	I0422 18:27:57.903773   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.903781   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:27:57.903786   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:27:57.903846   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:27:57.941674   78377 cri.go:89] found id: ""
	I0422 18:27:57.941705   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.941713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:27:57.941718   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:27:57.941767   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:27:57.984888   78377 cri.go:89] found id: ""
	I0422 18:27:57.984918   78377 logs.go:276] 0 containers: []
	W0422 18:27:57.984929   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:27:57.984935   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:27:57.984980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:27:58.026964   78377 cri.go:89] found id: ""
	I0422 18:27:58.026993   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.027006   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:27:58.027012   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:27:58.027059   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:27:58.065403   78377 cri.go:89] found id: ""
	I0422 18:27:58.065430   78377 logs.go:276] 0 containers: []
	W0422 18:27:58.065440   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:27:58.065450   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:27:58.065464   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:27:58.152471   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:27:58.152518   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:27:58.198766   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:27:58.198803   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:27:58.257760   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:27:58.257798   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:27:58.272656   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:27:58.272693   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:27:58.385784   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:27:55.703110   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:57.704061   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.704421   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:27:59.361229   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:01.362273   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.385750   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:02.886349   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:00.886736   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:00.902607   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:00.902684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:00.941476   78377 cri.go:89] found id: ""
	I0422 18:28:00.941506   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.941515   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:00.941521   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:00.941571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:00.983107   78377 cri.go:89] found id: ""
	I0422 18:28:00.983142   78377 logs.go:276] 0 containers: []
	W0422 18:28:00.983152   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:00.983159   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:00.983216   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:01.024419   78377 cri.go:89] found id: ""
	I0422 18:28:01.024448   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.024455   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:01.024461   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:01.024517   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:01.065941   78377 cri.go:89] found id: ""
	I0422 18:28:01.065973   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.065984   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:01.065992   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:01.066041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:01.107857   78377 cri.go:89] found id: ""
	I0422 18:28:01.107898   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.107908   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:01.107916   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:01.107980   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:01.149626   78377 cri.go:89] found id: ""
	I0422 18:28:01.149657   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.149667   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:01.149676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:01.149740   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:01.190491   78377 cri.go:89] found id: ""
	I0422 18:28:01.190520   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.190529   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:01.190535   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:01.190590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:01.231145   78377 cri.go:89] found id: ""
	I0422 18:28:01.231176   78377 logs.go:276] 0 containers: []
	W0422 18:28:01.231187   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:01.231197   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:01.231208   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:01.317826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:01.317874   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:01.369441   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:01.369478   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:01.432210   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:01.432251   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:01.446720   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:01.446749   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:01.528643   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.029816   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:04.044751   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:04.044836   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:04.085044   78377 cri.go:89] found id: ""
	I0422 18:28:04.085077   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.085089   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:04.085097   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:04.085148   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:04.129071   78377 cri.go:89] found id: ""
	I0422 18:28:04.129100   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.129111   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:04.129118   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:04.129181   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:04.167838   78377 cri.go:89] found id: ""
	I0422 18:28:04.167864   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.167874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:04.167881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:04.167943   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:04.216283   78377 cri.go:89] found id: ""
	I0422 18:28:04.216313   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.216321   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:04.216327   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:04.216376   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:04.255693   78377 cri.go:89] found id: ""
	I0422 18:28:04.255724   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.255731   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:04.255737   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:04.255786   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:04.293601   78377 cri.go:89] found id: ""
	I0422 18:28:04.293639   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.293651   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:04.293659   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:04.293709   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:04.358730   78377 cri.go:89] found id: ""
	I0422 18:28:04.358755   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.358767   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:04.358774   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:04.358837   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:04.399231   78377 cri.go:89] found id: ""
	I0422 18:28:04.399261   78377 logs.go:276] 0 containers: []
	W0422 18:28:04.399271   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:04.399280   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:04.399291   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:04.415526   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:04.415558   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:04.491845   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:04.491871   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:04.491885   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:04.575076   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:04.575148   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:04.621931   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:04.621956   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:02.203877   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:04.204896   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:03.860506   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.860713   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:05.384180   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.884714   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:07.173117   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:07.188914   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:07.188973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:07.233867   78377 cri.go:89] found id: ""
	I0422 18:28:07.233894   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.233902   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:07.233907   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:07.233968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:07.274777   78377 cri.go:89] found id: ""
	I0422 18:28:07.274818   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.274828   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:07.274835   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:07.274897   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:07.310813   78377 cri.go:89] found id: ""
	I0422 18:28:07.310864   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.310874   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:07.310881   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:07.310951   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:07.348397   78377 cri.go:89] found id: ""
	I0422 18:28:07.348423   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.348431   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:07.348436   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:07.348489   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:07.387344   78377 cri.go:89] found id: ""
	I0422 18:28:07.387371   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.387381   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:07.387388   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:07.387443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:07.426117   78377 cri.go:89] found id: ""
	I0422 18:28:07.426147   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.426158   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:07.426166   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:07.426233   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:07.466624   78377 cri.go:89] found id: ""
	I0422 18:28:07.466653   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.466664   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:07.466671   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:07.466729   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:07.504282   78377 cri.go:89] found id: ""
	I0422 18:28:07.504306   78377 logs.go:276] 0 containers: []
	W0422 18:28:07.504342   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:07.504353   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:07.504369   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:07.584111   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:07.584146   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:07.627212   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:07.627240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:07.676814   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:07.676849   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:07.691117   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:07.691156   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:07.764300   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:06.206560   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.703406   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:08.364348   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.861760   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.361127   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.392330   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:12.883081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:10.265313   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:10.280094   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:10.280170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:10.318208   78377 cri.go:89] found id: ""
	I0422 18:28:10.318236   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.318245   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:10.318251   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:10.318305   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:10.353450   78377 cri.go:89] found id: ""
	I0422 18:28:10.353477   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.353484   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:10.353490   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:10.353547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:10.398359   78377 cri.go:89] found id: ""
	I0422 18:28:10.398389   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.398400   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:10.398411   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:10.398474   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:10.435896   78377 cri.go:89] found id: ""
	I0422 18:28:10.435928   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.435939   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:10.435946   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:10.436025   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:10.479313   78377 cri.go:89] found id: ""
	I0422 18:28:10.479342   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.479353   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:10.479360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:10.479433   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:10.521949   78377 cri.go:89] found id: ""
	I0422 18:28:10.521978   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.521990   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:10.521997   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:10.522054   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:10.557697   78377 cri.go:89] found id: ""
	I0422 18:28:10.557722   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.557732   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:10.557739   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:10.557804   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:10.595060   78377 cri.go:89] found id: ""
	I0422 18:28:10.595090   78377 logs.go:276] 0 containers: []
	W0422 18:28:10.595102   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:10.595112   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:10.595142   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:10.649535   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:10.649570   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:10.664176   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:10.664210   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:10.748778   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:10.748818   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:10.748839   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:10.858019   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:10.858062   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:13.405737   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:13.420265   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:13.420342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:13.456505   78377 cri.go:89] found id: ""
	I0422 18:28:13.456534   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.456545   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:13.456551   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:13.456611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:13.493435   78377 cri.go:89] found id: ""
	I0422 18:28:13.493464   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.493477   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:13.493485   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:13.493541   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:13.530572   78377 cri.go:89] found id: ""
	I0422 18:28:13.530602   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.530614   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:13.530620   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:13.530682   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:13.565448   78377 cri.go:89] found id: ""
	I0422 18:28:13.565472   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.565480   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:13.565485   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:13.565574   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:13.613806   78377 cri.go:89] found id: ""
	I0422 18:28:13.613840   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.613851   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:13.613860   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:13.613924   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:13.649483   78377 cri.go:89] found id: ""
	I0422 18:28:13.649511   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.649522   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:13.649529   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:13.649589   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:13.689149   78377 cri.go:89] found id: ""
	I0422 18:28:13.689182   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.689193   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:13.689200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:13.689257   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:13.726431   78377 cri.go:89] found id: ""
	I0422 18:28:13.726454   78377 logs.go:276] 0 containers: []
	W0422 18:28:13.726461   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:13.726468   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:13.726480   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:13.782843   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:13.782882   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:13.797390   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:13.797415   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:13.877880   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:13.877905   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:13.877923   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:13.959103   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:13.959154   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:10.705202   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:13.203760   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.205898   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:15.361423   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:17.363341   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:14.883352   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.886433   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.382478   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:16.502589   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:16.519996   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:16.520070   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:16.559001   78377 cri.go:89] found id: ""
	I0422 18:28:16.559029   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.559037   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:16.559043   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:16.559095   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:16.620188   78377 cri.go:89] found id: ""
	I0422 18:28:16.620211   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.620219   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:16.620224   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:16.620283   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:16.670220   78377 cri.go:89] found id: ""
	I0422 18:28:16.670253   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.670264   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:16.670279   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:16.670345   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:16.710931   78377 cri.go:89] found id: ""
	I0422 18:28:16.710962   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.710973   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:16.710980   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:16.711043   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:16.748793   78377 cri.go:89] found id: ""
	I0422 18:28:16.748838   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.748845   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:16.748851   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:16.748904   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:16.785518   78377 cri.go:89] found id: ""
	I0422 18:28:16.785547   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.785554   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:16.785564   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:16.785616   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:16.825141   78377 cri.go:89] found id: ""
	I0422 18:28:16.825174   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.825192   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:16.825200   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:16.825265   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:16.866918   78377 cri.go:89] found id: ""
	I0422 18:28:16.866947   78377 logs.go:276] 0 containers: []
	W0422 18:28:16.866958   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:16.866972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:16.866987   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:16.912589   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:16.912633   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:16.968407   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:16.968446   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:16.983202   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:16.983241   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:17.063852   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:17.063875   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:17.063889   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:19.645012   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:17.703917   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.704958   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.861537   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.862949   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:21.882158   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:23.885280   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:19.659676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:19.659750   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:19.697348   78377 cri.go:89] found id: ""
	I0422 18:28:19.697382   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.697393   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:19.697401   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:19.697461   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:19.738830   78377 cri.go:89] found id: ""
	I0422 18:28:19.738864   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.738876   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:19.738883   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:19.738945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:19.783452   78377 cri.go:89] found id: ""
	I0422 18:28:19.783476   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.783483   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:19.783491   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:19.783554   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:19.826848   78377 cri.go:89] found id: ""
	I0422 18:28:19.826875   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.826886   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:19.826893   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:19.826945   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:19.867207   78377 cri.go:89] found id: ""
	I0422 18:28:19.867229   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.867236   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:19.867242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:19.867298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:19.903752   78377 cri.go:89] found id: ""
	I0422 18:28:19.903783   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.903799   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:19.903806   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:19.903870   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:19.946891   78377 cri.go:89] found id: ""
	I0422 18:28:19.946914   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.946921   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:19.946927   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:19.946997   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:19.989272   78377 cri.go:89] found id: ""
	I0422 18:28:19.989297   78377 logs.go:276] 0 containers: []
	W0422 18:28:19.989304   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:19.989312   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:19.989323   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:20.038854   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:20.038887   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:20.053553   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:20.053584   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:20.132687   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:20.132712   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:20.132727   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:20.209600   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:20.209634   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.752356   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:22.765506   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:22.765567   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:22.804991   78377 cri.go:89] found id: ""
	I0422 18:28:22.805022   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.805029   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:22.805035   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:22.805082   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:22.843726   78377 cri.go:89] found id: ""
	I0422 18:28:22.843757   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.843768   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:22.843775   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:22.843838   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:22.884584   78377 cri.go:89] found id: ""
	I0422 18:28:22.884610   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.884620   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:22.884627   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:22.884701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:22.920974   78377 cri.go:89] found id: ""
	I0422 18:28:22.921004   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.921020   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:22.921028   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:22.921092   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:22.956676   78377 cri.go:89] found id: ""
	I0422 18:28:22.956702   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.956713   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:22.956720   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:22.956784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:22.997517   78377 cri.go:89] found id: ""
	I0422 18:28:22.997545   78377 logs.go:276] 0 containers: []
	W0422 18:28:22.997553   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:22.997559   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:22.997623   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:23.036448   78377 cri.go:89] found id: ""
	I0422 18:28:23.036478   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.036489   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:23.036497   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:23.036561   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:23.075567   78377 cri.go:89] found id: ""
	I0422 18:28:23.075592   78377 logs.go:276] 0 containers: []
	W0422 18:28:23.075600   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:23.075611   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:23.075625   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:23.130372   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:23.130408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:23.147534   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:23.147567   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:23.222730   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:23.222753   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:23.222765   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:23.301972   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:23.302006   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:22.204356   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.703765   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:24.361251   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:26.862825   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.886291   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:28.382905   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:25.847521   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:25.861780   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:25.861867   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:25.899314   78377 cri.go:89] found id: ""
	I0422 18:28:25.899341   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.899349   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:25.899355   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:25.899412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:25.940057   78377 cri.go:89] found id: ""
	I0422 18:28:25.940088   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.940099   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:25.940106   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:25.940163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:25.974923   78377 cri.go:89] found id: ""
	I0422 18:28:25.974951   78377 logs.go:276] 0 containers: []
	W0422 18:28:25.974959   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:25.974968   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:25.975041   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:26.012533   78377 cri.go:89] found id: ""
	I0422 18:28:26.012559   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.012566   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:26.012572   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:26.012620   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:26.049804   78377 cri.go:89] found id: ""
	I0422 18:28:26.049828   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.049835   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:26.049841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:26.049888   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:26.092803   78377 cri.go:89] found id: ""
	I0422 18:28:26.092830   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.092842   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:26.092850   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:26.092919   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:26.130442   78377 cri.go:89] found id: ""
	I0422 18:28:26.130471   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.130480   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:26.130487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:26.130544   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:26.165933   78377 cri.go:89] found id: ""
	I0422 18:28:26.165957   78377 logs.go:276] 0 containers: []
	W0422 18:28:26.165966   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:26.165974   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:26.165986   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:26.245237   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:26.245259   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:26.245278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:26.330143   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:26.330181   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.372178   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:26.372204   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:26.429779   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:26.429817   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:28.945985   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:28.960470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:28.960546   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:28.999618   78377 cri.go:89] found id: ""
	I0422 18:28:28.999639   78377 logs.go:276] 0 containers: []
	W0422 18:28:28.999648   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:28.999653   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:28.999711   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:29.034177   78377 cri.go:89] found id: ""
	I0422 18:28:29.034211   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.034220   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:29.034225   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:29.034286   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:29.073759   78377 cri.go:89] found id: ""
	I0422 18:28:29.073782   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.073790   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:29.073796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:29.073857   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:29.111898   78377 cri.go:89] found id: ""
	I0422 18:28:29.111929   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.111941   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:29.111948   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:29.112005   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:29.148486   78377 cri.go:89] found id: ""
	I0422 18:28:29.148520   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.148531   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:29.148539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:29.148602   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:29.186715   78377 cri.go:89] found id: ""
	I0422 18:28:29.186743   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.186753   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:29.186759   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:29.186805   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:29.226387   78377 cri.go:89] found id: ""
	I0422 18:28:29.226422   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.226433   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:29.226440   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:29.226508   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:29.274102   78377 cri.go:89] found id: ""
	I0422 18:28:29.274131   78377 logs.go:276] 0 containers: []
	W0422 18:28:29.274142   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:29.274152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:29.274165   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:29.333066   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:29.333104   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:29.348376   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:29.348411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:29.422976   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:29.423009   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:29.423022   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:29.501211   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:29.501253   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:26.705590   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.205641   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:29.361439   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:31.361534   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:30.383502   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.887006   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:32.048316   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:32.063859   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:32.063934   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:32.104527   78377 cri.go:89] found id: ""
	I0422 18:28:32.104560   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.104571   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:32.104580   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:32.104645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:32.142945   78377 cri.go:89] found id: ""
	I0422 18:28:32.142976   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.142984   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:32.142990   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:32.143036   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:32.182359   78377 cri.go:89] found id: ""
	I0422 18:28:32.182385   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.182393   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:32.182399   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:32.182446   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:32.223041   78377 cri.go:89] found id: ""
	I0422 18:28:32.223069   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.223077   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:32.223083   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:32.223161   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:32.261892   78377 cri.go:89] found id: ""
	I0422 18:28:32.261924   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.261936   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:32.261943   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:32.262008   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:32.307497   78377 cri.go:89] found id: ""
	I0422 18:28:32.307527   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.307537   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:32.307546   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:32.307617   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:32.345180   78377 cri.go:89] found id: ""
	I0422 18:28:32.345214   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.345227   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:32.345235   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:32.345299   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:32.385999   78377 cri.go:89] found id: ""
	I0422 18:28:32.386025   78377 logs.go:276] 0 containers: []
	W0422 18:28:32.386033   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:32.386041   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:32.386053   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:32.444377   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:32.444436   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:32.460566   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:32.460594   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:32.535839   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:32.535860   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:32.535872   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:32.621998   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:32.622039   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:31.704145   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.704841   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:33.860769   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.860833   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.861583   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.382871   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:37.383164   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:35.165079   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:35.178804   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:35.178877   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:35.221032   78377 cri.go:89] found id: ""
	I0422 18:28:35.221065   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.221076   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:35.221083   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:35.221170   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:35.262550   78377 cri.go:89] found id: ""
	I0422 18:28:35.262573   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.262583   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:35.262589   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:35.262651   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:35.301799   78377 cri.go:89] found id: ""
	I0422 18:28:35.301826   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.301834   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:35.301840   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:35.301901   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:35.340606   78377 cri.go:89] found id: ""
	I0422 18:28:35.340635   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.340642   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:35.340647   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:35.340695   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:35.386226   78377 cri.go:89] found id: ""
	I0422 18:28:35.386251   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.386261   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:35.386268   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:35.386330   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:35.424555   78377 cri.go:89] found id: ""
	I0422 18:28:35.424584   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.424594   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:35.424601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:35.424662   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:35.465856   78377 cri.go:89] found id: ""
	I0422 18:28:35.465886   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.465895   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:35.465901   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:35.465963   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:35.504849   78377 cri.go:89] found id: ""
	I0422 18:28:35.504877   78377 logs.go:276] 0 containers: []
	W0422 18:28:35.504887   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:35.504898   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:35.504931   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:35.579177   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:35.579202   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:35.579217   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:35.656322   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:35.656359   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:35.700376   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:35.700411   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:35.753742   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:35.753776   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.269536   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:38.285945   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:38.286019   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:38.324408   78377 cri.go:89] found id: ""
	I0422 18:28:38.324441   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.324461   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:38.324468   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:38.324539   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:38.362320   78377 cri.go:89] found id: ""
	I0422 18:28:38.362343   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.362350   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:38.362363   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:38.362411   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:38.404208   78377 cri.go:89] found id: ""
	I0422 18:28:38.404234   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.404243   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:38.404248   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:38.404309   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:38.448250   78377 cri.go:89] found id: ""
	I0422 18:28:38.448314   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.448325   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:38.448332   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:38.448397   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:38.485803   78377 cri.go:89] found id: ""
	I0422 18:28:38.485836   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.485848   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:38.485856   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:38.485915   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:38.525903   78377 cri.go:89] found id: ""
	I0422 18:28:38.525933   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.525943   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:38.525952   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:38.526031   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:38.562638   78377 cri.go:89] found id: ""
	I0422 18:28:38.562664   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.562672   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:38.562677   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:38.562726   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:38.603614   78377 cri.go:89] found id: ""
	I0422 18:28:38.603642   78377 logs.go:276] 0 containers: []
	W0422 18:28:38.603653   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:38.603662   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:38.603673   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:38.658054   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:38.658086   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:38.674884   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:38.674908   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:38.748462   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:38.748502   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:38.748528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:38.826701   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:38.826741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:36.204210   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:38.205076   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:40.360574   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.862692   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:39.882407   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.882939   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:43.883102   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:41.374075   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:41.389161   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:41.389235   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:41.427033   78377 cri.go:89] found id: ""
	I0422 18:28:41.427064   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.427075   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:41.427096   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:41.427178   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:41.465376   78377 cri.go:89] found id: ""
	I0422 18:28:41.465408   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.465419   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:41.465427   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:41.465512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:41.502451   78377 cri.go:89] found id: ""
	I0422 18:28:41.502482   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.502490   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:41.502501   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:41.502563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:41.538748   78377 cri.go:89] found id: ""
	I0422 18:28:41.538784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.538796   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:41.538803   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:41.538862   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:41.576877   78377 cri.go:89] found id: ""
	I0422 18:28:41.576928   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.576941   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:41.576949   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:41.577010   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:41.615062   78377 cri.go:89] found id: ""
	I0422 18:28:41.615094   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.615105   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:41.615113   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:41.615190   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:41.656757   78377 cri.go:89] found id: ""
	I0422 18:28:41.656784   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.656792   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:41.656796   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:41.656861   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:41.694351   78377 cri.go:89] found id: ""
	I0422 18:28:41.694374   78377 logs.go:276] 0 containers: []
	W0422 18:28:41.694382   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:41.694390   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:41.694402   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:41.775490   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:41.775528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:41.820152   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:41.820182   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:41.874035   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:41.874071   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:41.889510   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:41.889534   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:41.967706   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:44.468471   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:44.483108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:44.483202   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:44.522503   78377 cri.go:89] found id: ""
	I0422 18:28:44.522528   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.522536   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:44.522542   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:44.522590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:44.562004   78377 cri.go:89] found id: ""
	I0422 18:28:44.562028   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.562036   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:44.562042   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:44.562098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:44.608907   78377 cri.go:89] found id: ""
	I0422 18:28:44.608944   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.608955   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:44.608964   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:44.609027   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:44.651192   78377 cri.go:89] found id: ""
	I0422 18:28:44.651225   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.651235   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:44.651242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:44.651304   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:40.703806   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:42.704426   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.707600   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.361890   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.860686   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:45.883300   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:47.884863   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:44.693057   78377 cri.go:89] found id: ""
	I0422 18:28:44.693095   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.693102   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:44.693108   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:44.693152   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:44.731029   78377 cri.go:89] found id: ""
	I0422 18:28:44.731070   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.731079   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:44.731092   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:44.731165   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:44.768935   78377 cri.go:89] found id: ""
	I0422 18:28:44.768964   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.768985   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:44.768993   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:44.769044   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:44.814942   78377 cri.go:89] found id: ""
	I0422 18:28:44.814966   78377 logs.go:276] 0 containers: []
	W0422 18:28:44.814984   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:44.814992   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:44.815012   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:44.872586   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:44.872612   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:44.929068   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:44.929125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:44.945931   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:44.945960   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:45.019871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:45.019907   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:45.019922   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:47.601880   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:47.616133   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:47.616219   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:47.656526   78377 cri.go:89] found id: ""
	I0422 18:28:47.656547   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.656554   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:47.656560   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:47.656618   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:47.696580   78377 cri.go:89] found id: ""
	I0422 18:28:47.696609   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.696619   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:47.696626   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:47.696684   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:47.737309   78377 cri.go:89] found id: ""
	I0422 18:28:47.737340   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.737351   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:47.737359   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:47.737413   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:47.774541   78377 cri.go:89] found id: ""
	I0422 18:28:47.774572   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.774583   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:47.774591   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:47.774652   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:47.810397   78377 cri.go:89] found id: ""
	I0422 18:28:47.810429   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.810437   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:47.810444   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:47.810506   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:47.847293   78377 cri.go:89] found id: ""
	I0422 18:28:47.847327   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.847337   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:47.847345   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:47.847403   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:47.887454   78377 cri.go:89] found id: ""
	I0422 18:28:47.887476   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.887486   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:47.887493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:47.887553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:47.926706   78377 cri.go:89] found id: ""
	I0422 18:28:47.926731   78377 logs.go:276] 0 containers: []
	W0422 18:28:47.926740   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:47.926750   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:47.926769   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:48.007354   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:48.007382   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:48.007398   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:48.094355   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:48.094394   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:48.137163   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:48.137194   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:48.187732   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:48.187767   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:47.207153   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.704440   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:49.863696   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.360739   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.384172   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:52.386468   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:50.703686   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:50.717040   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:50.717113   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:50.751573   78377 cri.go:89] found id: ""
	I0422 18:28:50.751598   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.751610   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:50.751617   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:50.751674   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:50.790434   78377 cri.go:89] found id: ""
	I0422 18:28:50.790465   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.790476   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:50.790483   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:50.790537   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:50.852414   78377 cri.go:89] found id: ""
	I0422 18:28:50.852442   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.852451   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:50.852457   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:50.852512   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:50.891439   78377 cri.go:89] found id: ""
	I0422 18:28:50.891470   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.891481   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:50.891488   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:50.891553   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:50.929376   78377 cri.go:89] found id: ""
	I0422 18:28:50.929409   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.929420   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:50.929428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:50.929493   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:50.963919   78377 cri.go:89] found id: ""
	I0422 18:28:50.963949   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.963957   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:50.963963   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:50.964022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:50.998583   78377 cri.go:89] found id: ""
	I0422 18:28:50.998621   78377 logs.go:276] 0 containers: []
	W0422 18:28:50.998632   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:50.998640   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:50.998702   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:51.036477   78377 cri.go:89] found id: ""
	I0422 18:28:51.036504   78377 logs.go:276] 0 containers: []
	W0422 18:28:51.036511   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:51.036519   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:51.036531   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:51.092688   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:51.092735   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.107749   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:51.107778   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:51.185620   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:51.185643   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:51.185665   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:51.268824   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:51.268856   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:53.814341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:53.829048   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:53.829123   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:53.873451   78377 cri.go:89] found id: ""
	I0422 18:28:53.873483   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.873493   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:53.873500   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:53.873564   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:53.915262   78377 cri.go:89] found id: ""
	I0422 18:28:53.915295   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.915306   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:53.915315   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:53.915404   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:53.958526   78377 cri.go:89] found id: ""
	I0422 18:28:53.958556   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.958567   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:53.958575   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:53.958645   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:53.997452   78377 cri.go:89] found id: ""
	I0422 18:28:53.997484   78377 logs.go:276] 0 containers: []
	W0422 18:28:53.997496   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:53.997503   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:53.997563   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:54.035937   78377 cri.go:89] found id: ""
	I0422 18:28:54.035961   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.035970   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:54.035975   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:54.036022   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:54.078858   78377 cri.go:89] found id: ""
	I0422 18:28:54.078885   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.078893   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:54.078898   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:54.078959   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:54.117431   78377 cri.go:89] found id: ""
	I0422 18:28:54.117454   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.117462   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:54.117470   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:54.117516   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:54.156022   78377 cri.go:89] found id: ""
	I0422 18:28:54.156050   78377 logs.go:276] 0 containers: []
	W0422 18:28:54.156059   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:54.156068   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:54.156085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:54.234075   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:54.234095   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:54.234108   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:54.314392   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:54.314430   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:28:54.359388   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:54.359420   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:54.416412   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:54.416449   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:51.704563   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.206032   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.362075   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.861096   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:54.883667   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:57.386081   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:56.934970   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:28:56.948741   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:28:56.948820   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:28:56.984911   78377 cri.go:89] found id: ""
	I0422 18:28:56.984943   78377 logs.go:276] 0 containers: []
	W0422 18:28:56.984954   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:28:56.984961   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:28:56.985026   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:28:57.022939   78377 cri.go:89] found id: ""
	I0422 18:28:57.022967   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.022980   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:28:57.022986   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:28:57.023033   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:28:57.064582   78377 cri.go:89] found id: ""
	I0422 18:28:57.064606   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.064619   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:28:57.064626   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:28:57.064686   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:28:57.105214   78377 cri.go:89] found id: ""
	I0422 18:28:57.105248   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.105259   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:28:57.105266   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:28:57.105317   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:28:57.142061   78377 cri.go:89] found id: ""
	I0422 18:28:57.142093   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.142104   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:28:57.142112   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:28:57.142176   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:28:57.187628   78377 cri.go:89] found id: ""
	I0422 18:28:57.187658   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.187668   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:28:57.187675   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:28:57.187744   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:28:57.223614   78377 cri.go:89] found id: ""
	I0422 18:28:57.223637   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.223645   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:28:57.223650   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:28:57.223705   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:28:57.261853   78377 cri.go:89] found id: ""
	I0422 18:28:57.261876   78377 logs.go:276] 0 containers: []
	W0422 18:28:57.261883   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:28:57.261890   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:28:57.261902   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:28:57.317980   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:28:57.318017   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:28:57.334434   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:28:57.334469   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:28:57.409639   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:28:57.409664   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:28:57.409680   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:28:57.494197   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:28:57.494240   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
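The block above is one full iteration of minikube's control-plane probe: with no kube-apiserver container running, every crictl query for a control-plane component returns an empty list, the fallback "describe nodes" call is refused on localhost:8443, and only the kubelet/CRI-O journals, dmesg and container status get collected. A minimal, hand-run sketch of the same probe follows; the component names and the 400-line journal limit are taken from the log, while the profile name and the final port check are illustrative assumptions, not part of the test.

    # Illustrative re-run of the probe sequence logged above (not minikube's own code).
    PROFILE=minikube   # placeholder: the profile under test
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name="$c"
    done
    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
    # why "describe nodes" is refused: check whether anything listens on 8443 (assumes ss exists in the guest)
    minikube -p "$PROFILE" ssh "sudo ss -tlnp | grep 8443"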
	I0422 18:28:56.709043   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.203924   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:58.861932   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.360398   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.360867   77634 pod_ready.go:102] pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace has status "Ready":"False"
	I0422 18:28:59.882692   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:01.883267   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.383872   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:00.069390   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:00.083231   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:00.083307   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:00.123418   78377 cri.go:89] found id: ""
	I0422 18:29:00.123448   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.123459   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:00.123470   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:00.123533   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:00.159047   78377 cri.go:89] found id: ""
	I0422 18:29:00.159070   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.159081   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:00.159087   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:00.159191   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:00.197934   78377 cri.go:89] found id: ""
	I0422 18:29:00.197960   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.198074   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:00.198086   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:00.198164   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:00.235243   78377 cri.go:89] found id: ""
	I0422 18:29:00.235273   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.235281   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:00.235287   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:00.235342   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:00.271866   78377 cri.go:89] found id: ""
	I0422 18:29:00.271901   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.271912   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:00.271921   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:00.271981   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:00.308481   78377 cri.go:89] found id: ""
	I0422 18:29:00.308518   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.308531   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:00.308539   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:00.308590   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:00.343970   78377 cri.go:89] found id: ""
	I0422 18:29:00.343998   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.344009   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:00.344016   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:00.344063   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:00.381443   78377 cri.go:89] found id: ""
	I0422 18:29:00.381462   78377 logs.go:276] 0 containers: []
	W0422 18:29:00.381468   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:00.381475   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:00.381486   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:00.436244   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:00.436278   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:00.451487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:00.451512   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:00.522440   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:00.522467   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:00.522483   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:00.602301   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:00.602333   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:03.141925   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:03.155393   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:03.155470   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:03.192801   78377 cri.go:89] found id: ""
	I0422 18:29:03.192825   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.192832   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:03.192838   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:03.192896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:03.244352   78377 cri.go:89] found id: ""
	I0422 18:29:03.244384   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.244395   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:03.244403   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:03.244466   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:03.303294   78377 cri.go:89] found id: ""
	I0422 18:29:03.303318   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.303326   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:03.303331   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:03.303384   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:03.354236   78377 cri.go:89] found id: ""
	I0422 18:29:03.354267   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.354275   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:03.354282   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:03.354343   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:03.394639   78377 cri.go:89] found id: ""
	I0422 18:29:03.394669   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.394679   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:03.394686   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:03.394754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:03.431362   78377 cri.go:89] found id: ""
	I0422 18:29:03.431408   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.431419   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:03.431428   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:03.431494   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:03.472150   78377 cri.go:89] found id: ""
	I0422 18:29:03.472178   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.472186   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:03.472191   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:03.472253   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:03.508059   78377 cri.go:89] found id: ""
	I0422 18:29:03.508083   78377 logs.go:276] 0 containers: []
	W0422 18:29:03.508091   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:03.508100   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:03.508112   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:03.557491   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:03.557528   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:03.573208   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:03.573245   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:03.643262   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:03.643284   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:03.643295   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:03.726353   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:03.726389   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:01.204827   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:03.204916   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:04.355065   77634 pod_ready.go:81] duration metric: took 4m0.0011361s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:04.355113   77634 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-d8s5p" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:04.355148   77634 pod_ready.go:38] duration metric: took 4m14.498231749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:04.355180   77634 kubeadm.go:591] duration metric: took 4m21.764385121s to restartPrimaryControlPlane
	W0422 18:29:04.355236   77634 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:04.355261   77634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
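At this point the 4m0s WaitExtra budget for metrics-server-569cc877fc-d8s5p has expired without the pod ever reporting Ready, so this run gives up on restarting the existing control plane and falls back to kubeadm reset. When triaging such a timeout by hand, the pod's conditions and events normally show why Ready never flipped; a hedged example is below (namespace taken from the log; the k8s-app=metrics-server label is the addon's usual selector and an assumption here).

    # Illustrative inspection of the pod the test was waiting on.
    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod -l k8s-app=metrics-server
    # print each pod's name and its Ready condition
    kubectl -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'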
	I0422 18:29:06.385395   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:08.883604   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:06.270762   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:06.284792   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:06.284866   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:06.324717   78377 cri.go:89] found id: ""
	I0422 18:29:06.324750   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.324762   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:06.324770   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:06.324829   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:06.368279   78377 cri.go:89] found id: ""
	I0422 18:29:06.368311   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.368320   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:06.368326   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:06.368390   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:06.413754   78377 cri.go:89] found id: ""
	I0422 18:29:06.413789   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.413800   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:06.413807   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:06.413864   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:06.453290   78377 cri.go:89] found id: ""
	I0422 18:29:06.453324   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.453335   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:06.453343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:06.453402   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:06.494420   78377 cri.go:89] found id: ""
	I0422 18:29:06.494472   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.494485   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:06.494493   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:06.494547   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:06.533736   78377 cri.go:89] found id: ""
	I0422 18:29:06.533768   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.533776   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:06.533784   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:06.533855   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:06.575873   78377 cri.go:89] found id: ""
	I0422 18:29:06.575899   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.575910   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:06.575917   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:06.575973   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:06.620505   78377 cri.go:89] found id: ""
	I0422 18:29:06.620532   78377 logs.go:276] 0 containers: []
	W0422 18:29:06.620541   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:06.620555   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:06.620569   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:06.701583   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:06.701607   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:06.701621   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:06.789370   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:06.789408   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:06.832879   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:06.832915   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:06.892055   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:06.892085   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:09.409104   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:09.422213   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:09.422287   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:09.463906   78377 cri.go:89] found id: ""
	I0422 18:29:09.463938   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.463949   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:09.463956   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:09.464016   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:09.504600   78377 cri.go:89] found id: ""
	I0422 18:29:09.504626   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.504634   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:09.504640   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:09.504701   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:09.544271   78377 cri.go:89] found id: ""
	I0422 18:29:09.544297   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.544308   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:09.544315   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:09.544385   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:09.584323   78377 cri.go:89] found id: ""
	I0422 18:29:09.584355   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.584367   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:09.584375   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:09.584443   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:09.621595   78377 cri.go:89] found id: ""
	I0422 18:29:09.621622   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.621632   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:09.621638   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:09.621703   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:05.703491   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:07.704534   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.705814   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:11.383569   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:13.883521   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:09.654701   78377 cri.go:89] found id: ""
	I0422 18:29:09.654731   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.654741   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:09.654749   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:09.654809   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:09.691517   78377 cri.go:89] found id: ""
	I0422 18:29:09.691544   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.691555   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:09.691561   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:09.691611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:09.726139   78377 cri.go:89] found id: ""
	I0422 18:29:09.726164   78377 logs.go:276] 0 containers: []
	W0422 18:29:09.726172   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:09.726179   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:09.726192   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:09.796871   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:09.796899   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:09.796920   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:09.876465   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:09.876509   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:09.917893   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:09.917930   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:09.968232   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:09.968273   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:12.484341   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:12.499173   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:12.499243   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:12.536536   78377 cri.go:89] found id: ""
	I0422 18:29:12.536566   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.536577   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:12.536583   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:12.536642   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:12.578616   78377 cri.go:89] found id: ""
	I0422 18:29:12.578645   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.578655   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:12.578663   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:12.578742   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:12.615437   78377 cri.go:89] found id: ""
	I0422 18:29:12.615464   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.615475   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:12.615483   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:12.615552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:12.652622   78377 cri.go:89] found id: ""
	I0422 18:29:12.652647   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.652655   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:12.652661   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:12.652717   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:12.687831   78377 cri.go:89] found id: ""
	I0422 18:29:12.687863   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.687886   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:12.687895   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:12.687968   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:12.725695   78377 cri.go:89] found id: ""
	I0422 18:29:12.725727   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.725734   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:12.725740   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:12.725801   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:12.764633   78377 cri.go:89] found id: ""
	I0422 18:29:12.764660   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.764669   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:12.764676   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:12.764754   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:12.803161   78377 cri.go:89] found id: ""
	I0422 18:29:12.803188   78377 logs.go:276] 0 containers: []
	W0422 18:29:12.803199   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:12.803209   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:12.803225   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:12.874276   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:12.874298   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:12.874311   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:12.961086   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:12.961123   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:13.009108   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:13.009134   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:13.060695   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:13.060741   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:11.706608   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:14.204779   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:16.384284   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.884060   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:15.578465   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:15.592781   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:15.592847   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:15.630723   78377 cri.go:89] found id: ""
	I0422 18:29:15.630763   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.630775   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:15.630784   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:15.630848   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:15.672656   78377 cri.go:89] found id: ""
	I0422 18:29:15.672682   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.672689   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:15.672694   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:15.672743   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:15.718081   78377 cri.go:89] found id: ""
	I0422 18:29:15.718107   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.718115   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:15.718120   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:15.718168   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:15.757204   78377 cri.go:89] found id: ""
	I0422 18:29:15.757229   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.757237   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:15.757242   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:15.757289   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:15.793481   78377 cri.go:89] found id: ""
	I0422 18:29:15.793507   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.793515   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:15.793520   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:15.793571   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:15.831366   78377 cri.go:89] found id: ""
	I0422 18:29:15.831414   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.831435   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:15.831443   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:15.831510   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:15.868553   78377 cri.go:89] found id: ""
	I0422 18:29:15.868583   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.868593   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:15.868601   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:15.868657   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:15.908487   78377 cri.go:89] found id: ""
	I0422 18:29:15.908517   78377 logs.go:276] 0 containers: []
	W0422 18:29:15.908527   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:15.908538   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:15.908553   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:15.923479   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:15.923507   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:15.995109   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:15.995156   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:15.995172   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:16.074773   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:16.074812   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.122088   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:16.122114   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:18.674525   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:18.688006   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:18.688077   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:18.726070   78377 cri.go:89] found id: ""
	I0422 18:29:18.726101   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.726114   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:18.726122   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:18.726183   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:18.762885   78377 cri.go:89] found id: ""
	I0422 18:29:18.762916   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.762928   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:18.762936   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:18.762996   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:18.802266   78377 cri.go:89] found id: ""
	I0422 18:29:18.802289   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.802297   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:18.802302   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:18.802349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:18.841407   78377 cri.go:89] found id: ""
	I0422 18:29:18.841445   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.841453   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:18.841459   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:18.841515   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:18.877234   78377 cri.go:89] found id: ""
	I0422 18:29:18.877308   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.877330   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:18.877343   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:18.877410   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:18.917025   78377 cri.go:89] found id: ""
	I0422 18:29:18.917056   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.917063   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:18.917068   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:18.917124   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:18.954201   78377 cri.go:89] found id: ""
	I0422 18:29:18.954228   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.954235   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:18.954241   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:18.954298   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:18.992427   78377 cri.go:89] found id: ""
	I0422 18:29:18.992454   78377 logs.go:276] 0 containers: []
	W0422 18:29:18.992463   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:18.992471   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:18.992482   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:19.041093   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:19.041125   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:19.056711   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:19.056742   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:19.142569   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:19.142593   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:19.142604   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:19.217815   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:19.217855   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:16.704652   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:18.704899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:21.391438   77929 pod_ready.go:102] pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:22.376750   77929 pod_ready.go:81] duration metric: took 4m0.000534542s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" ...
	E0422 18:29:22.376787   77929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-l5qqw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:29:22.376811   77929 pod_ready.go:38] duration metric: took 4m11.560762914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:22.376844   77929 kubeadm.go:591] duration metric: took 4m19.827120959s to restartPrimaryControlPlane
	W0422 18:29:22.376929   77929 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:22.376953   77929 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:29:21.767953   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:21.783373   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:21.783428   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:21.821614   78377 cri.go:89] found id: ""
	I0422 18:29:21.821644   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.821656   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:21.821664   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:21.821725   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:21.857122   78377 cri.go:89] found id: ""
	I0422 18:29:21.857151   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.857161   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:21.857168   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:21.857228   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:21.894803   78377 cri.go:89] found id: ""
	I0422 18:29:21.894825   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.894833   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:21.894841   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:21.894896   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:21.933665   78377 cri.go:89] found id: ""
	I0422 18:29:21.933701   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.933712   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:21.933723   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:21.933787   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:21.973071   78377 cri.go:89] found id: ""
	I0422 18:29:21.973113   78377 logs.go:276] 0 containers: []
	W0422 18:29:21.973125   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:21.973143   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:21.973210   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:22.011359   78377 cri.go:89] found id: ""
	I0422 18:29:22.011391   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.011403   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:22.011410   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:22.011488   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:22.049681   78377 cri.go:89] found id: ""
	I0422 18:29:22.049709   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.049716   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:22.049721   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:22.049782   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:22.088347   78377 cri.go:89] found id: ""
	I0422 18:29:22.088375   78377 logs.go:276] 0 containers: []
	W0422 18:29:22.088386   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:22.088396   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:22.088410   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:22.142224   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:22.142267   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:22.156643   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:22.156668   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:22.231849   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:22.231879   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:22.231892   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:22.313426   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:22.313470   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:21.203699   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:23.204704   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:25.206832   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:24.863473   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:24.882024   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:29:24.882098   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:29:24.924050   78377 cri.go:89] found id: ""
	I0422 18:29:24.924081   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.924092   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:29:24.924100   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:29:24.924163   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:29:24.976296   78377 cri.go:89] found id: ""
	I0422 18:29:24.976326   78377 logs.go:276] 0 containers: []
	W0422 18:29:24.976335   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:29:24.976345   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:29:24.976412   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:29:25.029222   78377 cri.go:89] found id: ""
	I0422 18:29:25.029251   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.029272   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:29:25.029280   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:29:25.029349   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:29:25.077673   78377 cri.go:89] found id: ""
	I0422 18:29:25.077706   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.077717   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:29:25.077724   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:29:25.077784   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:29:25.125043   78377 cri.go:89] found id: ""
	I0422 18:29:25.125078   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.125090   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:29:25.125098   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:29:25.125179   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:29:25.175533   78377 cri.go:89] found id: ""
	I0422 18:29:25.175566   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.175577   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:29:25.175585   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:29:25.175647   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:29:25.221986   78377 cri.go:89] found id: ""
	I0422 18:29:25.222016   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.222024   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:29:25.222030   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:29:25.222091   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:29:25.264497   78377 cri.go:89] found id: ""
	I0422 18:29:25.264536   78377 logs.go:276] 0 containers: []
	W0422 18:29:25.264547   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:29:25.264558   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:29:25.264574   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:29:25.374379   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:29:25.374438   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:29:25.418690   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:29:25.418726   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:29:25.472266   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:29:25.472300   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:29:25.488487   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:29:25.488582   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:29:25.586957   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0422 18:29:28.087958   78377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:29:28.102224   78377 kubeadm.go:591] duration metric: took 4m2.253635072s to restartPrimaryControlPlane
	W0422 18:29:28.102310   78377 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:29:28.102339   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
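The 78377 (v1.20.0) run reaches the same point: after 4m2s of failed probes it abandons restartPrimaryControlPlane and resets the cluster. Condensed, the recovery path recorded above and in the lines that follow is kubeadm reset over the CRI-O socket, swapping in the freshly rendered kubeadm.yaml, removing the now-stale /etc/kubernetes/*.conf files, then kubeadm init from that config. A sketch with the commands copied from the log; the --ignore-preflight-errors list is abbreviated here, the full list appears in the init line further down.

    # Recovery path taken when the control plane cannot be restarted (commands as logged).
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo rm -f /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,...   # abbreviated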
	I0422 18:29:27.706178   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:30.203899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:31.612457   78377 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.510090318s)
	I0422 18:29:31.612545   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:31.628958   78377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:31.640917   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:31.652696   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:31.652721   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:31.652770   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:31.664114   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:31.664168   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:31.674923   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:31.684843   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:31.684896   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:31.695240   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.706058   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:31.706111   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:31.717091   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:31.727265   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:31.727336   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
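The cleanup sequence above is minikube's stale-kubeconfig check: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files do not exist yet, so every grep exits with status 2). A minimal bash sketch of the same pattern, using the endpoint and paths from this run:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected control-plane endpoint
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done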
	I0422 18:29:31.737801   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:31.812467   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:29:31.812529   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:31.966913   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:31.967059   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:31.967197   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:32.154019   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:32.156034   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:32.156123   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:32.156226   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:32.156318   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:32.156373   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:32.156431   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:32.156486   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:32.156545   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:32.156925   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:32.157393   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:32.157903   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:32.157945   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:32.158030   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:32.431206   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:32.644858   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:32.778777   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:32.983609   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:32.999320   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:32.999451   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:32.999532   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:33.136671   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:33.138828   78377 out.go:204]   - Booting up control plane ...
	I0422 18:29:33.138935   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:33.143714   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:33.145398   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:33.157636   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:33.157801   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:29:32.204107   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:34.707228   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:36.541281   77634 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.185998541s)
	I0422 18:29:36.541367   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:36.558729   77634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:36.569635   77634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:36.579901   77634 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:36.579919   77634 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:36.579959   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:29:36.589540   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:36.589602   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:36.600704   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:29:36.610945   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:36.611012   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:36.621316   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.631251   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:36.631305   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:36.641661   77634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:29:36.650970   77634 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:36.651049   77634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:36.661012   77634 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:36.717676   77634 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:36.717771   77634 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:36.861264   77634 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:36.861404   77634 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:36.861534   77634 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:37.083032   77634 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:37.084958   77634 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:37.085069   77634 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:37.085179   77634 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:37.085296   77634 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:37.085387   77634 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:37.085505   77634 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:37.085579   77634 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:37.085665   77634 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:37.085748   77634 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:37.085869   77634 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:37.085985   77634 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:37.086037   77634 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:37.086114   77634 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:37.337747   77634 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:37.538036   77634 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:37.630303   77634 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:37.755713   77634 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:38.081451   77634 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:38.082265   77634 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:38.084958   77634 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:38.086755   77634 out.go:204]   - Booting up control plane ...
	I0422 18:29:38.086893   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:38.087023   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:38.089714   77634 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:38.108313   77634 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:38.108786   77634 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:38.108849   77634 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:38.241537   77634 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:38.241681   77634 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:37.203550   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:39.205619   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:38.743798   77634 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.847818ms
	I0422 18:29:38.743910   77634 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:44.245440   77634 kubeadm.go:309] [api-check] The API server is healthy after 5.501913498s
	I0422 18:29:44.265283   77634 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:29:44.280940   77634 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:29:44.318688   77634 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:29:44.318990   77634 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-782377 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:29:44.332201   77634 kubeadm.go:309] [bootstrap-token] Using token: o52gh5.f6sjmkidroy1sl61
	I0422 18:29:44.333546   77634 out.go:204]   - Configuring RBAC rules ...
	I0422 18:29:44.333670   77634 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:29:44.342847   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:29:44.350983   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:29:44.354214   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:29:44.361351   77634 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:29:44.365170   77634 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:29:44.654414   77634 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:29:45.170247   77634 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:29:45.654714   77634 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:29:45.654744   77634 kubeadm.go:309] 
	I0422 18:29:45.654847   77634 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:29:45.654871   77634 kubeadm.go:309] 
	I0422 18:29:45.654984   77634 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:29:45.654996   77634 kubeadm.go:309] 
	I0422 18:29:45.655028   77634 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:29:45.655108   77634 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:29:45.655201   77634 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:29:45.655211   77634 kubeadm.go:309] 
	I0422 18:29:45.655308   77634 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:29:45.655317   77634 kubeadm.go:309] 
	I0422 18:29:45.655395   77634 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:29:45.655414   77634 kubeadm.go:309] 
	I0422 18:29:45.655486   77634 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:29:45.655597   77634 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:29:45.655700   77634 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:29:45.655714   77634 kubeadm.go:309] 
	I0422 18:29:45.655824   77634 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:29:45.655951   77634 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:29:45.655963   77634 kubeadm.go:309] 
	I0422 18:29:45.656067   77634 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656226   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:29:45.656258   77634 kubeadm.go:309] 	--control-plane 
	I0422 18:29:45.656265   77634 kubeadm.go:309] 
	I0422 18:29:45.656383   77634 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:29:45.656394   77634 kubeadm.go:309] 
	I0422 18:29:45.656513   77634 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token o52gh5.f6sjmkidroy1sl61 \
	I0422 18:29:45.656602   77634 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:29:45.657124   77634 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:29:45.657152   77634 cni.go:84] Creating CNI manager for ""
	I0422 18:29:45.657168   77634 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:29:45.658873   77634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:29:41.705450   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:44.205661   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:45.660184   77634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:29:45.671834   77634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
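The 496-byte conflist written here is the bridge CNI configuration that the "Configuring bridge CNI" step installs. Its exact contents are not captured in the log; a bridge-plus-portmap conflist of roughly this shape is what the step produces (the field values below are illustrative, not taken from this run):

    # illustrative sketch only - the real file content is not shown in the log
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF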
	I0422 18:29:45.693947   77634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:29:45.694034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:45.694054   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-782377 minikube.k8s.io/updated_at=2024_04_22T18_29_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=embed-certs-782377 minikube.k8s.io/primary=true
	I0422 18:29:45.901437   77634 ops.go:34] apiserver oom_adj: -16
	I0422 18:29:45.901443   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.402050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.902222   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.402527   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:47.901535   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:46.206598   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.703899   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:48.401738   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:48.902497   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.402046   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:49.901756   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.402023   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:50.901600   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.401905   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:51.901739   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.401859   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:52.902155   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.661872   77929 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.28489375s)
	I0422 18:29:54.661952   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:29:54.679790   77929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:29:54.689947   77929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:29:54.700173   77929 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:29:54.700191   77929 kubeadm.go:156] found existing configuration files:
	
	I0422 18:29:54.700230   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0422 18:29:54.711462   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:29:54.711519   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:29:54.721157   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0422 18:29:54.730698   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:29:54.730769   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:29:54.740596   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.750450   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:29:54.750521   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:29:54.760582   77929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0422 18:29:54.770551   77929 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:29:54.770608   77929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:29:54.781181   77929 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:29:54.834872   77929 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:29:54.834950   77929 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:29:54.982435   77929 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:29:54.982574   77929 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:29:54.982675   77929 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:29:55.208724   77929 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:29:50.704498   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:53.203270   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.206485   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:29:55.210946   77929 out.go:204]   - Generating certificates and keys ...
	I0422 18:29:55.211072   77929 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:29:55.211180   77929 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:29:55.211326   77929 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:29:55.211425   77929 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:29:55.211546   77929 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:29:55.211655   77929 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:29:55.211746   77929 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:29:55.211831   77929 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:29:55.211932   77929 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:29:55.212028   77929 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:29:55.212076   77929 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:29:55.212150   77929 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:29:55.456090   77929 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:29:55.747103   77929 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:29:55.940962   77929 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:29:56.076850   77929 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:29:56.253326   77929 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:29:56.253921   77929 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:29:56.259311   77929 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:29:53.402196   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:53.902328   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.402353   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:54.901736   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.401514   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:55.902415   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.402371   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:56.902117   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.401817   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:57.902050   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.402034   77634 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:29:58.574005   77634 kubeadm.go:1107] duration metric: took 12.880033802s to wait for elevateKubeSystemPrivileges
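The repeated "kubectl get sa default" calls above are the wait the log attributes to elevateKubeSystemPrivileges: after creating the minikube-rbac clusterrolebinding (which, per the command at 18:29:45.694054, grants cluster-admin to the kube-system default service account so addons can run), minikube polls roughly twice a second until the default service account exists. A rough bash equivalent of that wait, reusing the binary and kubeconfig paths from the log:

    # rough equivalent of the polling loop above (same binary and kubeconfig paths)
    until sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done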
	W0422 18:29:58.574051   77634 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:29:58.574061   77634 kubeadm.go:393] duration metric: took 5m16.036878933s to StartCluster
	I0422 18:29:58.574083   77634 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.574173   77634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:29:58.576621   77634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:29:58.576908   77634 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.114 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:29:58.578444   77634 out.go:177] * Verifying Kubernetes components...
	I0422 18:29:58.576967   77634 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:29:58.577120   77634 config.go:182] Loaded profile config "embed-certs-782377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:29:58.579836   77634 addons.go:69] Setting default-storageclass=true in profile "embed-certs-782377"
	I0422 18:29:58.579846   77634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:29:58.579850   77634 addons.go:69] Setting metrics-server=true in profile "embed-certs-782377"
	I0422 18:29:58.579873   77634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-782377"
	I0422 18:29:58.579896   77634 addons.go:234] Setting addon metrics-server=true in "embed-certs-782377"
	W0422 18:29:58.579910   77634 addons.go:243] addon metrics-server should already be in state true
	I0422 18:29:58.579952   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.579841   77634 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-782377"
	I0422 18:29:58.580057   77634 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-782377"
	W0422 18:29:58.580070   77634 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:29:58.580099   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.580279   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580284   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580301   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580308   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.580460   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.580488   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.603276   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34711
	I0422 18:29:58.603459   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45737
	I0422 18:29:58.603483   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0422 18:29:58.607248   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607265   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607392   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.607836   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.607853   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.607983   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.608001   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.608344   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608373   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.608505   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.608932   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.608963   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612034   77634 addons.go:234] Setting addon default-storageclass=true in "embed-certs-782377"
	W0422 18:29:58.612056   77634 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:29:58.612084   77634 host.go:66] Checking if "embed-certs-782377" exists ...
	I0422 18:29:58.612467   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.612485   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.612786   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.612802   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.613185   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.613700   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.613728   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.630170   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0422 18:29:58.630586   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.631061   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.631081   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.631523   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.631693   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.631847   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45337
	I0422 18:29:58.632457   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.632941   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.632966   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.633179   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0422 18:29:58.633322   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.633567   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.633688   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.635830   77634 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:29:58.633856   77634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:29:58.634354   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.636961   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.637004   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:29:58.637027   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:29:58.637045   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.637006   77634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:29:58.637294   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.637508   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.639287   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.640999   77634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:29:58.640236   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:56.261447   77929 out.go:204]   - Booting up control plane ...
	I0422 18:29:56.261539   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:29:56.261635   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:29:56.261736   77929 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:29:56.285519   77929 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:29:56.285675   77929 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:29:56.285752   77929 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:29:56.437635   77929 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:29:56.437767   77929 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:29:56.944001   77929 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 506.500244ms
	I0422 18:29:56.944104   77929 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:29:58.640741   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.642428   77634 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.641034   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.642448   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:29:58.642456   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.642470   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.642590   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.642733   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.642860   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.645684   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646424   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.646469   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.646728   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.646929   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.647079   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.647331   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.657385   77634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0422 18:29:58.658062   77634 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:29:58.658658   77634 main.go:141] libmachine: Using API Version  1
	I0422 18:29:58.658676   77634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:29:58.659065   77634 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:29:58.659314   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetState
	I0422 18:29:58.661001   77634 main.go:141] libmachine: (embed-certs-782377) Calling .DriverName
	I0422 18:29:58.661274   77634 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:58.661292   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:29:58.661309   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHHostname
	I0422 18:29:58.664551   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665005   77634 main.go:141] libmachine: (embed-certs-782377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:0f:f2", ip: ""} in network mk-embed-certs-782377: {Iface:virbr2 ExpiryTime:2024-04-22 19:24:29 +0000 UTC Type:0 Mac:52:54:00:ab:0f:f2 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:embed-certs-782377 Clientid:01:52:54:00:ab:0f:f2}
	I0422 18:29:58.665029   77634 main.go:141] libmachine: (embed-certs-782377) DBG | domain embed-certs-782377 has defined IP address 192.168.50.114 and MAC address 52:54:00:ab:0f:f2 in network mk-embed-certs-782377
	I0422 18:29:58.665185   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHPort
	I0422 18:29:58.665397   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHKeyPath
	I0422 18:29:58.665560   77634 main.go:141] libmachine: (embed-certs-782377) Calling .GetSSHUsername
	I0422 18:29:58.665692   77634 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/embed-certs-782377/id_rsa Username:docker}
	I0422 18:29:58.840086   77634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:29:58.872963   77634 node_ready.go:35] waiting up to 6m0s for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882942   77634 node_ready.go:49] node "embed-certs-782377" has status "Ready":"True"
	I0422 18:29:58.882978   77634 node_ready.go:38] duration metric: took 9.978929ms for node "embed-certs-782377" to be "Ready" ...
	I0422 18:29:58.882990   77634 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:29:58.892484   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:29:58.964679   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:29:58.987690   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:29:59.001748   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:29:59.001776   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:29:59.095009   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:29:59.095039   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:29:59.242427   77634 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.242451   77634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:29:59.321464   77634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:29:59.989825   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025095721s)
	I0422 18:29:59.989883   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.989895   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.989828   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.002098611s)
	I0422 18:29:59.989974   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990005   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990193   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990231   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990239   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990247   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990254   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990306   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:29:59.990341   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990355   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990369   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:29:59.990380   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:29:59.990504   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990523   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:29:59.990572   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:29:59.990588   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.025628   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.025655   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.025970   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.025991   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.434245   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.434287   77634 pod_ready.go:81] duration metric: took 1.54176792s for pod "coredns-7db6d8ff4d-425zd" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.434301   77634 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454521   77634 pod_ready.go:92] pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.454545   77634 pod_ready.go:81] duration metric: took 20.235494ms for pod "coredns-7db6d8ff4d-44bfz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.454557   77634 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.473166   77634 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.151631277s)
	I0422 18:30:00.473225   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473266   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473625   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.473660   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.473683   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.473706   77634 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:00.473719   77634 main.go:141] libmachine: (embed-certs-782377) Calling .Close
	I0422 18:30:00.473998   77634 main.go:141] libmachine: (embed-certs-782377) DBG | Closing plugin on server side
	I0422 18:30:00.474079   77634 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:00.474098   77634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:00.474114   77634 addons.go:470] Verifying addon metrics-server=true in "embed-certs-782377"
	I0422 18:30:00.476224   77634 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:29:57.706757   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.206098   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:00.477945   77634 addons.go:505] duration metric: took 1.900979481s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:30:00.493925   77634 pod_ready.go:92] pod "etcd-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.493956   77634 pod_ready.go:81] duration metric: took 39.391277ms for pod "etcd-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.493971   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502733   77634 pod_ready.go:92] pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.502762   77634 pod_ready.go:81] duration metric: took 8.782315ms for pod "kube-apiserver-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.502776   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517227   77634 pod_ready.go:92] pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.517249   77634 pod_ready.go:81] duration metric: took 14.465418ms for pod "kube-controller-manager-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.517260   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881221   77634 pod_ready.go:92] pod "kube-proxy-6qsdm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:00.881245   77634 pod_ready.go:81] duration metric: took 363.979231ms for pod "kube-proxy-6qsdm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:00.881254   77634 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277017   77634 pod_ready.go:92] pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:01.277103   77634 pod_ready.go:81] duration metric: took 395.840808ms for pod "kube-scheduler-embed-certs-782377" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:01.277125   77634 pod_ready.go:38] duration metric: took 2.394112246s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:01.277153   77634 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:01.277240   77634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:01.295278   77634 api_server.go:72] duration metric: took 2.718332063s to wait for apiserver process to appear ...
	I0422 18:30:01.295316   77634 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:01.295345   77634 api_server.go:253] Checking apiserver healthz at https://192.168.50.114:8443/healthz ...
	I0422 18:30:01.299754   77634 api_server.go:279] https://192.168.50.114:8443/healthz returned 200:
	ok
	I0422 18:30:01.300888   77634 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:01.300912   77634 api_server.go:131] duration metric: took 5.588825ms to wait for apiserver health ...
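
The healthz wait logged above (api_server.go:253/279) amounts to polling the apiserver endpoint until it returns 200 with body "ok". A self-contained sketch of that polling loop follows, using the address from this run; TLS verification is skipped purely to keep the example short, whereas a real check would trust the cluster CA (illustrative, not minikube's implementation):

    // healthzpoll.go - hedged sketch of an apiserver /healthz readiness poll.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.50.114:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("apiserver did not become healthy before the deadline")
    }
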
	I0422 18:30:01.300920   77634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:01.480184   77634 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:01.480216   77634 system_pods.go:61] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.480220   77634 system_pods.go:61] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.480224   77634 system_pods.go:61] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.480227   77634 system_pods.go:61] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.480231   77634 system_pods.go:61] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.480234   77634 system_pods.go:61] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.480237   77634 system_pods.go:61] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.480243   77634 system_pods.go:61] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.480246   77634 system_pods.go:61] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.480253   77634 system_pods.go:74] duration metric: took 179.327678ms to wait for pod list to return data ...
	I0422 18:30:01.480260   77634 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:01.676749   77634 default_sa.go:45] found service account: "default"
	I0422 18:30:01.676792   77634 default_sa.go:55] duration metric: took 196.525393ms for default service account to be created ...
	I0422 18:30:01.676805   77634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:01.881811   77634 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:01.881846   77634 system_pods.go:89] "coredns-7db6d8ff4d-425zd" [70c9e268-0ecd-4d68-aac9-b979888bfd95] Running
	I0422 18:30:01.881852   77634 system_pods.go:89] "coredns-7db6d8ff4d-44bfz" [70b8e7df-e60e-441c-8249-5eebb9a4409c] Running
	I0422 18:30:01.881856   77634 system_pods.go:89] "etcd-embed-certs-782377" [4202759e-6e8d-4d1e-b3a9-68d1e7f5d6fb] Running
	I0422 18:30:01.881861   77634 system_pods.go:89] "kube-apiserver-embed-certs-782377" [46a0e7d7-71bb-4a76-a7fb-4edf82649e83] Running
	I0422 18:30:01.881866   77634 system_pods.go:89] "kube-controller-manager-embed-certs-782377" [4399a4f4-8648-4723-a144-2db662ac2a44] Running
	I0422 18:30:01.881871   77634 system_pods.go:89] "kube-proxy-6qsdm" [a79875f5-4fdf-4a0e-9bfc-985fda10a906] Running
	I0422 18:30:01.881875   77634 system_pods.go:89] "kube-scheduler-embed-certs-782377" [7012cd6a-fdc3-4c0e-b205-2b303cbeaa26] Running
	I0422 18:30:01.881884   77634 system_pods.go:89] "metrics-server-569cc877fc-lv49p" [e99119a1-18ac-4ce8-ab9d-5cbbeddc243b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:01.881891   77634 system_pods.go:89] "storage-provisioner" [4f515603-72e0-4408-9180-1010cf97877d] Running
	I0422 18:30:01.881902   77634 system_pods.go:126] duration metric: took 205.08856ms to wait for k8s-apps to be running ...
	I0422 18:30:01.881915   77634 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:01.881971   77634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:01.898653   77634 system_svc.go:56] duration metric: took 16.727076ms WaitForService to wait for kubelet
	I0422 18:30:01.898688   77634 kubeadm.go:576] duration metric: took 3.321747224s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:01.898716   77634 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:02.079527   77634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:02.079552   77634 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:02.079567   77634 node_conditions.go:105] duration metric: took 180.844523ms to run NodePressure ...
	I0422 18:30:02.079581   77634 start.go:240] waiting for startup goroutines ...
	I0422 18:30:02.079590   77634 start.go:245] waiting for cluster config update ...
	I0422 18:30:02.079603   77634 start.go:254] writing updated cluster config ...
	I0422 18:30:02.079881   77634 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:02.131965   77634 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:02.133816   77634 out.go:177] * Done! kubectl is now configured to use "embed-certs-782377" cluster and "default" namespace by default
	I0422 18:30:02.446649   77929 kubeadm.go:309] [api-check] The API server is healthy after 5.502662802s
	I0422 18:30:02.466311   77929 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:02.504029   77929 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:02.586946   77929 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:02.587250   77929 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-856422 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:02.600362   77929 kubeadm.go:309] [bootstrap-token] Using token: f03yx2.2vmzf4rav70vm6gm
	I0422 18:30:02.601830   77929 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:02.601961   77929 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:02.608688   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:02.621264   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:02.625695   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:02.630424   77929 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:02.639203   77929 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:02.856167   77929 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:03.309505   77929 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:03.855419   77929 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:03.855443   77929 kubeadm.go:309] 
	I0422 18:30:03.855541   77929 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:03.855567   77929 kubeadm.go:309] 
	I0422 18:30:03.855643   77929 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:03.855653   77929 kubeadm.go:309] 
	I0422 18:30:03.855688   77929 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:03.855756   77929 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:03.855841   77929 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:03.855854   77929 kubeadm.go:309] 
	I0422 18:30:03.855909   77929 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:03.855915   77929 kubeadm.go:309] 
	I0422 18:30:03.855954   77929 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:03.855960   77929 kubeadm.go:309] 
	I0422 18:30:03.856051   77929 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:03.856171   77929 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:03.856248   77929 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:03.856259   77929 kubeadm.go:309] 
	I0422 18:30:03.856390   77929 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:03.856484   77929 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:03.856496   77929 kubeadm.go:309] 
	I0422 18:30:03.856636   77929 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.856729   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:03.856749   77929 kubeadm.go:309] 	--control-plane 
	I0422 18:30:03.856755   77929 kubeadm.go:309] 
	I0422 18:30:03.856823   77929 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:03.856829   77929 kubeadm.go:309] 
	I0422 18:30:03.856911   77929 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token f03yx2.2vmzf4rav70vm6gm \
	I0422 18:30:03.857040   77929 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:03.857540   77929 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
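
The --discovery-token-ca-cert-hash printed in the join commands above is, per kubeadm's convention, the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. The sketch below reproduces such a value from /etc/kubernetes/pki/ca.crt on the control-plane node (illustrative; not minikube code):

    // cahash.go - hedged sketch: derive a kubeadm-style "sha256:<hex>" CA cert hash
    // from the cluster CA certificate's Subject Public Key Info.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
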
	I0422 18:30:03.857569   77929 cni.go:84] Creating CNI manager for ""
	I0422 18:30:03.857583   77929 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:03.859350   77929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:03.860736   77929 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:03.873189   77929 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:30:03.897193   77929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:03.897260   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:03.897317   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-856422 minikube.k8s.io/updated_at=2024_04_22T18_30_03_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=default-k8s-diff-port-856422 minikube.k8s.io/primary=true
	I0422 18:30:04.114339   77929 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:04.114499   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:02.703452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.705502   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:04.615355   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.115530   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:05.614776   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.114991   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:06.614772   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.114921   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.614799   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.115218   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:08.614688   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:09.114578   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:07.203762   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.704636   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:09.615201   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.115526   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:10.614511   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.115041   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:11.615220   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.115463   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:12.614937   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.115470   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.615417   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:14.114916   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:13.158118   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:30:13.158841   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:13.159056   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:11.706452   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.203931   77400 pod_ready.go:102] pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:14.614582   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.115466   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:15.615542   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.115554   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:16.614586   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.114645   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.614945   77929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:17.769793   77929 kubeadm.go:1107] duration metric: took 13.872592974s to wait for elevateKubeSystemPrivileges
	W0422 18:30:17.769857   77929 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:30:17.769869   77929 kubeadm.go:393] duration metric: took 5m15.279261637s to StartCluster
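
The repeated "kubectl get sa default" runs above are a readiness poll: the default ServiceAccount only exists once the control plane is serving and kube-controller-manager has created it, so the command is retried until it succeeds. A minimal sketch of that retry pattern, with an assumed interval and a 2-minute budget (both illustrative, not minikube's exact values):

    // sawait.go - hedged sketch: poll "kubectl get sa default" until the default
    // ServiceAccount exists or the deadline expires.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
    			"get", "sa", "default")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
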
	I0422 18:30:17.769889   77929 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.769958   77929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:30:17.771921   77929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:30:17.772222   77929 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.206 Port:8444 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:30:17.774219   77929 out.go:177] * Verifying Kubernetes components...
	I0422 18:30:17.772365   77929 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:30:17.772496   77929 config.go:182] Loaded profile config "default-k8s-diff-port-856422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:30:17.776231   77929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:30:17.776249   77929 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776267   77929 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776294   77929 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776307   77929 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:30:17.776321   77929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-856422"
	I0422 18:30:17.776343   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776284   77929 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-856422"
	I0422 18:30:17.776412   77929 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.776430   77929 addons.go:243] addon metrics-server should already be in state true
	I0422 18:30:17.776469   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.776775   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776809   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776778   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776846   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.776777   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.776926   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.796665   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41919
	I0422 18:30:17.796701   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0422 18:30:17.796976   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40073
	I0422 18:30:17.797083   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797472   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797609   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.797795   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.797824   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798111   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798141   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798158   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798499   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.798627   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.798648   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.798728   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.798776   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799001   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.799077   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.799107   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.799274   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.803095   77929 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-856422"
	W0422 18:30:17.803141   77929 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:30:17.803175   77929 host.go:66] Checking if "default-k8s-diff-port-856422" exists ...
	I0422 18:30:17.803544   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.803580   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.820753   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I0422 18:30:17.821272   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.821822   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.821839   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.822247   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.822315   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I0422 18:30:17.822640   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.823287   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41435
	I0422 18:30:17.823373   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.823976   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.824141   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824152   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824479   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.824498   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.824561   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.824727   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.825176   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.825646   77929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:30:17.825675   77929 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:30:17.826014   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.828122   77929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:30:17.826808   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.829694   77929 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:17.829711   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:30:17.829729   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.831322   77929 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:30:17.834942   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:30:17.834959   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:30:17.834979   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.833531   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.832894   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835054   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.835078   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.835276   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.835468   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.835674   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.837838   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838180   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.838204   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.838459   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.838656   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.838827   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.838983   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.844804   77929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44667
	I0422 18:30:17.845252   77929 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:30:17.845762   77929 main.go:141] libmachine: Using API Version  1
	I0422 18:30:17.845780   77929 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:30:17.846071   77929 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:30:17.846240   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetState
	I0422 18:30:17.847881   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .DriverName
	I0422 18:30:17.848127   77929 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:17.848142   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:30:17.848159   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHHostname
	I0422 18:30:17.850959   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851369   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:4a:d1", ip: ""} in network mk-default-k8s-diff-port-856422: {Iface:virbr3 ExpiryTime:2024-04-22 19:16:57 +0000 UTC Type:0 Mac:52:54:00:df:4a:d1 Iaid: IPaddr:192.168.61.206 Prefix:24 Hostname:default-k8s-diff-port-856422 Clientid:01:52:54:00:df:4a:d1}
	I0422 18:30:17.851389   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | domain default-k8s-diff-port-856422 has defined IP address 192.168.61.206 and MAC address 52:54:00:df:4a:d1 in network mk-default-k8s-diff-port-856422
	I0422 18:30:17.851548   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHPort
	I0422 18:30:17.851786   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHKeyPath
	I0422 18:30:17.851918   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .GetSSHUsername
	I0422 18:30:17.852081   77929 sshutil.go:53] new ssh client: &{IP:192.168.61.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/default-k8s-diff-port-856422/id_rsa Username:docker}
	I0422 18:30:17.997608   77929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:30:18.066476   77929 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.139937   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:30:18.141619   77929 node_ready.go:49] node "default-k8s-diff-port-856422" has status "Ready":"True"
	I0422 18:30:18.141645   77929 node_ready.go:38] duration metric: took 75.13675ms for node "default-k8s-diff-port-856422" to be "Ready" ...
	I0422 18:30:18.141658   77929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:18.168289   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
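
The extra wait that starts here blocks until every system-critical pod matching the selectors listed at pod_ready.go:35 reports Ready. A rough out-of-band equivalent using "kubectl wait", with the same selectors and the 6m budget from the log (illustrative only, not the code path minikube actually uses):

    // podwait.go - hedged sketch: wait for each system-critical selector to be Ready.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	selectors := []string{
    		"k8s-app=kube-dns",
    		"component=etcd",
    		"component=kube-apiserver",
    		"component=kube-controller-manager",
    		"k8s-app=kube-proxy",
    		"component=kube-scheduler",
    	}
    	for _, sel := range selectors {
    		cmd := exec.Command("kubectl", "-n", "kube-system", "wait",
    			"--for=condition=Ready", "pod", "-l", sel, "--timeout=6m")
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("wait failed for %s: %v\n%s", sel, err, out)
    		}
    	}
    }
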
	I0422 18:30:18.217351   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:30:18.217374   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:30:18.280089   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:30:18.283704   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:30:18.283734   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:30:18.314907   77929 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.314936   77929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:30:18.379950   77929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:30:18.595931   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.595969   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596350   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596374   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.596389   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.596398   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596660   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.596699   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.596722   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610244   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:18.610277   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:18.610614   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:18.610635   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:18.610659   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:18.159553   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:18.159883   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:19.513892   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.233747961s)
	I0422 18:30:19.513948   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.513961   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514326   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.514460   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.514491   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.514506   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.514414   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517592   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.517601   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.517617   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.805551   77929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.425552646s)
	I0422 18:30:19.805610   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.805621   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.805986   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.806040   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.806064   77929 main.go:141] libmachine: Making call to close driver server
	I0422 18:30:19.806083   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) Calling .Close
	I0422 18:30:19.807818   77929 main.go:141] libmachine: (default-k8s-diff-port-856422) DBG | Closing plugin on server side
	I0422 18:30:19.807865   77929 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:30:19.807874   77929 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:30:19.807889   77929 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-856422"
	I0422 18:30:19.809871   77929 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0422 18:30:15.697614   77400 pod_ready.go:81] duration metric: took 4m0.000479463s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" ...
	E0422 18:30:15.697661   77400 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jmjhm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0422 18:30:15.697678   77400 pod_ready.go:38] duration metric: took 4m9.017394523s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:15.697704   77400 kubeadm.go:591] duration metric: took 4m15.772560858s to restartPrimaryControlPlane
	W0422 18:30:15.697751   77400 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0422 18:30:15.697777   77400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:30:19.811644   77929 addons.go:505] duration metric: took 2.039289124s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0422 18:30:20.174912   77929 pod_ready.go:102] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"False"
	I0422 18:30:20.675213   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.675247   77929 pod_ready.go:81] duration metric: took 2.506921343s for pod "coredns-7db6d8ff4d-jg8h6" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.675261   77929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681665   77929 pod_ready.go:92] pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.681690   77929 pod_ready.go:81] duration metric: took 6.421217ms for pod "coredns-7db6d8ff4d-vc6vz" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.681700   77929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687893   77929 pod_ready.go:92] pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.687926   77929 pod_ready.go:81] duration metric: took 6.218166ms for pod "etcd-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.687941   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696603   77929 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.696634   77929 pod_ready.go:81] duration metric: took 8.684682ms for pod "kube-apiserver-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.696649   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702776   77929 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:20.702800   77929 pod_ready.go:81] duration metric: took 6.141484ms for pod "kube-controller-manager-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:20.702813   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073451   77929 pod_ready.go:92] pod "kube-proxy-4m8cm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.073485   77929 pod_ready.go:81] duration metric: took 370.663669ms for pod "kube-proxy-4m8cm" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.073500   77929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474144   77929 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace has status "Ready":"True"
	I0422 18:30:21.474175   77929 pod_ready.go:81] duration metric: took 400.665802ms for pod "kube-scheduler-default-k8s-diff-port-856422" in "kube-system" namespace to be "Ready" ...
	I0422 18:30:21.474190   77929 pod_ready.go:38] duration metric: took 3.332515716s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:30:21.474207   77929 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:30:21.474273   77929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:30:21.491320   77929 api_server.go:72] duration metric: took 3.719060391s to wait for apiserver process to appear ...
	I0422 18:30:21.491352   77929 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:30:21.491378   77929 api_server.go:253] Checking apiserver healthz at https://192.168.61.206:8444/healthz ...
	I0422 18:30:21.496589   77929 api_server.go:279] https://192.168.61.206:8444/healthz returned 200:
	ok
	I0422 18:30:21.497405   77929 api_server.go:141] control plane version: v1.30.0
	I0422 18:30:21.497426   77929 api_server.go:131] duration metric: took 6.067469ms to wait for apiserver health ...
	I0422 18:30:21.497433   77929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:30:21.675885   77929 system_pods.go:59] 9 kube-system pods found
	I0422 18:30:21.675912   77929 system_pods.go:61] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:21.675916   77929 system_pods.go:61] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:21.675924   77929 system_pods.go:61] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:21.675928   77929 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:21.675932   77929 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:21.675935   77929 system_pods.go:61] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:21.675939   77929 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:21.675945   77929 system_pods.go:61] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:21.675949   77929 system_pods.go:61] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:21.675959   77929 system_pods.go:74] duration metric: took 178.519985ms to wait for pod list to return data ...
	I0422 18:30:21.675965   77929 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:30:21.872358   77929 default_sa.go:45] found service account: "default"
	I0422 18:30:21.872382   77929 default_sa.go:55] duration metric: took 196.412252ms for default service account to be created ...
	I0422 18:30:21.872391   77929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:30:22.075660   77929 system_pods.go:86] 9 kube-system pods found
	I0422 18:30:22.075689   77929 system_pods.go:89] "coredns-7db6d8ff4d-jg8h6" [031f1940-ae96-44ae-a69c-ea0bbdce81fb] Running
	I0422 18:30:22.075694   77929 system_pods.go:89] "coredns-7db6d8ff4d-vc6vz" [8a7134db-ac2b-49d9-ab61-b4acd6ab4d67] Running
	I0422 18:30:22.075698   77929 system_pods.go:89] "etcd-default-k8s-diff-port-856422" [424fe02a-0a23-453d-bcfa-0a2c94a92b98] Running
	I0422 18:30:22.075702   77929 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-856422" [0a9de7a4-2c3f-48c5-aa49-da333a89ddc8] Running
	I0422 18:30:22.075706   77929 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-856422" [c139adc2-672c-4d6b-8149-f8186dc76c30] Running
	I0422 18:30:22.075710   77929 system_pods.go:89] "kube-proxy-4m8cm" [f0673173-2469-4cef-9bef-1bee7504559c] Running
	I0422 18:30:22.075714   77929 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-856422" [595d85b5-f102-4f4f-9fad-20a131156bdf] Running
	I0422 18:30:22.075722   77929 system_pods.go:89] "metrics-server-569cc877fc-jmdnk" [54d9a335-db4a-417d-9909-256d3a2b7fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:30:22.075726   77929 system_pods.go:89] "storage-provisioner" [9998f3b2-a39c-4b2c-a7c2-f02aec08f548] Running
	I0422 18:30:22.075735   77929 system_pods.go:126] duration metric: took 203.339608ms to wait for k8s-apps to be running ...
	I0422 18:30:22.075742   77929 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:30:22.075785   77929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:22.091186   77929 system_svc.go:56] duration metric: took 15.433207ms WaitForService to wait for kubelet
	I0422 18:30:22.091219   77929 kubeadm.go:576] duration metric: took 4.318966383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:30:22.091237   77929 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:30:22.272944   77929 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:30:22.272971   77929 node_conditions.go:123] node cpu capacity is 2
	I0422 18:30:22.272980   77929 node_conditions.go:105] duration metric: took 181.734735ms to run NodePressure ...
	I0422 18:30:22.272991   77929 start.go:240] waiting for startup goroutines ...
	I0422 18:30:22.273000   77929 start.go:245] waiting for cluster config update ...
	I0422 18:30:22.273010   77929 start.go:254] writing updated cluster config ...
	I0422 18:30:22.273248   77929 ssh_runner.go:195] Run: rm -f paused
	I0422 18:30:22.323725   77929 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:30:22.325876   77929 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-856422" cluster and "default" namespace by default
	I0422 18:30:28.159925   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:28.160147   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:30:48.161034   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:30:48.161430   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
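
The kubelet-check failures above are kubeadm retrying the probe it quotes: a GET against http://localhost:10248/healthz on the node, which is refused for as long as the kubelet is not running. A tiny sketch of the same probe, to be run on the node itself (on a healthy node it returns 200):

    // kubelethealth.go - hedged sketch of the kubelet healthz probe kubeadm retries.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	resp, err := http.Get("http://localhost:10248/healthz")
    	if err != nil {
    		fmt.Println("kubelet healthz probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("kubelet healthz returned %d: %s\n", resp.StatusCode, body)
    }
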
	I0422 18:30:48.109960   77400 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.41215685s)
	I0422 18:30:48.110037   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:30:48.127246   77400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 18:30:48.138280   77400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:30:48.148521   77400 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:30:48.148545   77400 kubeadm.go:156] found existing configuration files:
	
	I0422 18:30:48.148588   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:30:48.160411   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:30:48.160483   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:30:48.170748   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:30:48.180399   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:30:48.180451   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:30:48.192521   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.202200   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:30:48.202274   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:30:48.212241   77400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:30:48.221754   77400 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:30:48.221821   77400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
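
The sequence above checks each existing kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it, so the kubeadm init that follows can regenerate them. A compact sketch of that cleanup logic, using the endpoint and paths from this run (illustrative, not minikube's actual code):

    // staleconf.go - hedged sketch: drop kubeconfigs that are missing or do not
    // reference the expected control-plane endpoint.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	endpoint := []byte("https://control-plane.minikube.internal:8443")
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, endpoint) {
    			fmt.Printf("%s missing or stale, removing\n", f)
    			os.Remove(f)
    		}
    	}
    }
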
	I0422 18:30:48.231555   77400 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:30:48.456873   77400 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:30:57.943980   77400 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 18:30:57.944080   77400 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:30:57.944182   77400 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:30:57.944305   77400 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:30:57.944411   77400 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:30:57.944499   77400 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:30:57.946110   77400 out.go:204]   - Generating certificates and keys ...
	I0422 18:30:57.946192   77400 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:30:57.946262   77400 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:30:57.946385   77400 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:30:57.946464   77400 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:30:57.946559   77400 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:30:57.946683   77400 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:30:57.946772   77400 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:30:57.946835   77400 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:30:57.946902   77400 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:30:57.946963   77400 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:30:57.947000   77400 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:30:57.947054   77400 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:30:57.947116   77400 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:30:57.947201   77400 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 18:30:57.947283   77400 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:30:57.947383   77400 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:30:57.947458   77400 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:30:57.947589   77400 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:30:57.947662   77400 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:30:57.949092   77400 out.go:204]   - Booting up control plane ...
	I0422 18:30:57.949194   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:30:57.949279   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:30:57.949336   77400 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:30:57.949419   77400 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:30:57.949505   77400 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:30:57.949544   77400 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:30:57.949664   77400 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 18:30:57.949739   77400 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 18:30:57.949794   77400 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 502.588061ms
	I0422 18:30:57.949862   77400 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 18:30:57.949957   77400 kubeadm.go:309] [api-check] The API server is healthy after 5.510546703s
	I0422 18:30:57.950048   77400 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 18:30:57.950152   77400 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 18:30:57.950204   77400 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 18:30:57.950352   77400 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-407991 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 18:30:57.950453   77400 kubeadm.go:309] [bootstrap-token] Using token: cwotot.4qmmrydp0nd6w5tq
	I0422 18:30:57.951938   77400 out.go:204]   - Configuring RBAC rules ...
	I0422 18:30:57.952040   77400 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 18:30:57.952134   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 18:30:57.952285   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 18:30:57.952410   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 18:30:57.952535   77400 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 18:30:57.952666   77400 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 18:30:57.952799   77400 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 18:30:57.952867   77400 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 18:30:57.952936   77400 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 18:30:57.952952   77400 kubeadm.go:309] 
	I0422 18:30:57.953013   77400 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 18:30:57.953019   77400 kubeadm.go:309] 
	I0422 18:30:57.953084   77400 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 18:30:57.953090   77400 kubeadm.go:309] 
	I0422 18:30:57.953110   77400 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 18:30:57.953199   77400 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 18:30:57.953281   77400 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 18:30:57.953289   77400 kubeadm.go:309] 
	I0422 18:30:57.953374   77400 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 18:30:57.953381   77400 kubeadm.go:309] 
	I0422 18:30:57.953453   77400 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 18:30:57.953461   77400 kubeadm.go:309] 
	I0422 18:30:57.953538   77400 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 18:30:57.953636   77400 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 18:30:57.953719   77400 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 18:30:57.953726   77400 kubeadm.go:309] 
	I0422 18:30:57.953813   77400 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 18:30:57.953919   77400 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 18:30:57.953930   77400 kubeadm.go:309] 
	I0422 18:30:57.954047   77400 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954187   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b \
	I0422 18:30:57.954222   77400 kubeadm.go:309] 	--control-plane 
	I0422 18:30:57.954232   77400 kubeadm.go:309] 
	I0422 18:30:57.954364   77400 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 18:30:57.954374   77400 kubeadm.go:309] 
	I0422 18:30:57.954440   77400 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token cwotot.4qmmrydp0nd6w5tq \
	I0422 18:30:57.954553   77400 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:705adc20a86f77f4cac73b2380cc3570cdfc4e09b1082339848be1805dda657b 
	I0422 18:30:57.954574   77400 cni.go:84] Creating CNI manager for ""
	I0422 18:30:57.954583   77400 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 18:30:57.956278   77400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 18:30:57.957592   77400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 18:30:57.970080   77400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0422 18:30:57.991711   77400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 18:30:57.991779   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:57.991780   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-407991 minikube.k8s.io/updated_at=2024_04_22T18_30_57_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=no-preload-407991 minikube.k8s.io/primary=true
	I0422 18:30:58.232025   77400 ops.go:34] apiserver oom_adj: -16
	I0422 18:30:58.232162   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:58.732395   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.232855   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:30:59.732187   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.232654   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:00.732995   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.232856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:01.732735   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.232474   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:02.732930   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.232411   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:03.732457   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.232888   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:04.732856   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.232873   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:05.733177   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.232682   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:06.733241   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.232711   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:07.732922   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.232815   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:08.732377   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.232576   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:09.732243   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.232350   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:10.732764   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.232338   77400 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 18:31:11.357414   77400 kubeadm.go:1107] duration metric: took 13.365692776s to wait for elevateKubeSystemPrivileges
	W0422 18:31:11.357460   77400 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 18:31:11.357472   77400 kubeadm.go:393] duration metric: took 5m11.48385131s to StartCluster
	I0422 18:31:11.357493   77400 settings.go:142] acquiring lock: {Name:mkce29494d583a7652e3329e9ed33ac4897018b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.357565   77400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:31:11.359176   77400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-11572/kubeconfig: {Name:mkcbf98ec9962144e8687b3db86ba4e5163b0669 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:31:11.359391   77400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.164 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0422 18:31:11.360948   77400 out.go:177] * Verifying Kubernetes components...
	I0422 18:31:11.359461   77400 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:31:11.359641   77400 config.go:182] Loaded profile config "no-preload-407991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:31:11.362433   77400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:31:11.362446   77400 addons.go:69] Setting storage-provisioner=true in profile "no-preload-407991"
	I0422 18:31:11.362464   77400 addons.go:69] Setting default-storageclass=true in profile "no-preload-407991"
	I0422 18:31:11.362486   77400 addons.go:69] Setting metrics-server=true in profile "no-preload-407991"
	I0422 18:31:11.362495   77400 addons.go:234] Setting addon storage-provisioner=true in "no-preload-407991"
	I0422 18:31:11.362500   77400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-407991"
	I0422 18:31:11.362515   77400 addons.go:234] Setting addon metrics-server=true in "no-preload-407991"
	W0422 18:31:11.362527   77400 addons.go:243] addon metrics-server should already be in state true
	W0422 18:31:11.362506   77400 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:31:11.362557   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362567   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.362929   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362932   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.362963   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362971   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.362974   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.363144   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.379089   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I0422 18:31:11.379582   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.380121   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.380145   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.380496   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.381098   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.381132   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.383229   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0422 18:31:11.383513   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0422 18:31:11.383642   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.383977   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.384136   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384148   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384552   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.384754   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.384770   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.384801   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.385103   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.386102   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.386130   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.388554   77400 addons.go:234] Setting addon default-storageclass=true in "no-preload-407991"
	W0422 18:31:11.388569   77400 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:31:11.388589   77400 host.go:66] Checking if "no-preload-407991" exists ...
	I0422 18:31:11.388921   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.388938   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.401669   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36917
	I0422 18:31:11.402268   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.402852   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.402869   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.403427   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.403610   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.404849   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0422 18:31:11.405356   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.405588   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.406112   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.406129   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.407696   77400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:31:11.406649   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.409174   77400 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.409195   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:31:11.409214   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.409261   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.411378   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.412836   77400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:31:11.411939   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0422 18:31:11.414011   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:31:11.414027   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:31:11.413155   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.414045   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.414069   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.413487   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.414097   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.413841   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.414686   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.414781   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.414794   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.414871   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.415256   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.415607   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.416288   77400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 18:31:11.416343   77400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 18:31:11.417257   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417623   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.417644   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.417898   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.418074   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.418325   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.418468   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.432218   77400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0422 18:31:11.432682   77400 main.go:141] libmachine: () Calling .GetVersion
	I0422 18:31:11.433096   77400 main.go:141] libmachine: Using API Version  1
	I0422 18:31:11.433108   77400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 18:31:11.433685   77400 main.go:141] libmachine: () Calling .GetMachineName
	I0422 18:31:11.433887   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetState
	I0422 18:31:11.435675   77400 main.go:141] libmachine: (no-preload-407991) Calling .DriverName
	I0422 18:31:11.435931   77400 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.435952   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:31:11.435969   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHHostname
	I0422 18:31:11.438700   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439107   77400 main.go:141] libmachine: (no-preload-407991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:e4:a0", ip: ""} in network mk-no-preload-407991: {Iface:virbr1 ExpiryTime:2024-04-22 19:15:51 +0000 UTC Type:0 Mac:52:54:00:a4:e4:a0 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:no-preload-407991 Clientid:01:52:54:00:a4:e4:a0}
	I0422 18:31:11.439144   77400 main.go:141] libmachine: (no-preload-407991) DBG | domain no-preload-407991 has defined IP address 192.168.39.164 and MAC address 52:54:00:a4:e4:a0 in network mk-no-preload-407991
	I0422 18:31:11.439237   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHPort
	I0422 18:31:11.439482   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHKeyPath
	I0422 18:31:11.439662   77400 main.go:141] libmachine: (no-preload-407991) Calling .GetSSHUsername
	I0422 18:31:11.439833   77400 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/no-preload-407991/id_rsa Username:docker}
	I0422 18:31:11.610190   77400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:31:11.654061   77400 node_ready.go:35] waiting up to 6m0s for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663869   77400 node_ready.go:49] node "no-preload-407991" has status "Ready":"True"
	I0422 18:31:11.663904   77400 node_ready.go:38] duration metric: took 9.806821ms for node "no-preload-407991" to be "Ready" ...
	I0422 18:31:11.663917   77400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:11.673895   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:11.752785   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:31:11.770023   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:31:11.770054   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:31:11.799895   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:31:11.872083   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:31:11.872113   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:31:11.984597   77400 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:11.984626   77400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:31:12.059137   77400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:31:13.130584   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330646778s)
	I0422 18:31:13.130694   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130718   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.130716   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.37789401s)
	I0422 18:31:13.130833   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.130847   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131067   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131135   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131159   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131172   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131289   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131304   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131312   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.131319   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.131327   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.131559   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131574   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131601   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.131621   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.131621   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.173181   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.173205   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.173478   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.173501   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.279764   77400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.220585481s)
	I0422 18:31:13.279813   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.279828   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280221   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280241   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280261   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280276   77400 main.go:141] libmachine: Making call to close driver server
	I0422 18:31:13.280290   77400 main.go:141] libmachine: (no-preload-407991) Calling .Close
	I0422 18:31:13.280532   77400 main.go:141] libmachine: (no-preload-407991) DBG | Closing plugin on server side
	I0422 18:31:13.280570   77400 main.go:141] libmachine: Successfully made call to close driver server
	I0422 18:31:13.280577   77400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0422 18:31:13.280586   77400 addons.go:470] Verifying addon metrics-server=true in "no-preload-407991"
	I0422 18:31:13.282757   77400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0422 18:31:13.284029   77400 addons.go:505] duration metric: took 1.924572004s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0422 18:31:13.681968   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.682004   77400 pod_ready.go:81] duration metric: took 2.008061657s for pod "coredns-7db6d8ff4d-9tt8m" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.682017   77400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687240   77400 pod_ready.go:92] pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.687268   77400 pod_ready.go:81] duration metric: took 5.242949ms for pod "coredns-7db6d8ff4d-fclvg" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.687281   77400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693047   77400 pod_ready.go:92] pod "etcd-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.693074   77400 pod_ready.go:81] duration metric: took 5.784769ms for pod "etcd-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.693086   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705008   77400 pod_ready.go:92] pod "kube-apiserver-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.705028   77400 pod_ready.go:81] duration metric: took 11.934672ms for pod "kube-apiserver-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.705037   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721814   77400 pod_ready.go:92] pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:13.721840   77400 pod_ready.go:81] duration metric: took 16.796546ms for pod "kube-controller-manager-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:13.721855   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079660   77400 pod_ready.go:92] pod "kube-proxy-47g8k" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.079681   77400 pod_ready.go:81] duration metric: took 357.819791ms for pod "kube-proxy-47g8k" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.079692   77400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480000   77400 pod_ready.go:92] pod "kube-scheduler-no-preload-407991" in "kube-system" namespace has status "Ready":"True"
	I0422 18:31:14.480026   77400 pod_ready.go:81] duration metric: took 400.326493ms for pod "kube-scheduler-no-preload-407991" in "kube-system" namespace to be "Ready" ...
	I0422 18:31:14.480037   77400 pod_ready.go:38] duration metric: took 2.816106046s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:31:14.480054   77400 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:31:14.480123   77400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:31:14.508798   77400 api_server.go:72] duration metric: took 3.149365253s to wait for apiserver process to appear ...
	I0422 18:31:14.508822   77400 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:31:14.508842   77400 api_server.go:253] Checking apiserver healthz at https://192.168.39.164:8443/healthz ...
	I0422 18:31:14.523293   77400 api_server.go:279] https://192.168.39.164:8443/healthz returned 200:
	ok
	I0422 18:31:14.524410   77400 api_server.go:141] control plane version: v1.30.0
	I0422 18:31:14.524439   77400 api_server.go:131] duration metric: took 15.608906ms to wait for apiserver health ...
	I0422 18:31:14.524448   77400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 18:31:14.682120   77400 system_pods.go:59] 9 kube-system pods found
	I0422 18:31:14.682152   77400 system_pods.go:61] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:14.682157   77400 system_pods.go:61] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:14.682161   77400 system_pods.go:61] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:14.682164   77400 system_pods.go:61] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:14.682169   77400 system_pods.go:61] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:14.682173   77400 system_pods.go:61] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:14.682178   77400 system_pods.go:61] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:14.682188   77400 system_pods.go:61] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:14.682194   77400 system_pods.go:61] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:14.682205   77400 system_pods.go:74] duration metric: took 157.750249ms to wait for pod list to return data ...
	I0422 18:31:14.682222   77400 default_sa.go:34] waiting for default service account to be created ...
	I0422 18:31:14.878556   77400 default_sa.go:45] found service account: "default"
	I0422 18:31:14.878581   77400 default_sa.go:55] duration metric: took 196.353021ms for default service account to be created ...
	I0422 18:31:14.878590   77400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 18:31:15.081385   77400 system_pods.go:86] 9 kube-system pods found
	I0422 18:31:15.081415   77400 system_pods.go:89] "coredns-7db6d8ff4d-9tt8m" [42140aad-7ab4-4f46-9f24-0fc8717220f4] Running
	I0422 18:31:15.081425   77400 system_pods.go:89] "coredns-7db6d8ff4d-fclvg" [6e2c4436-1941-4409-8a6b-5f377cb7212c] Running
	I0422 18:31:15.081430   77400 system_pods.go:89] "etcd-no-preload-407991" [ae6e37cd-0564-4ca1-99f1-87834e019e98] Running
	I0422 18:31:15.081434   77400 system_pods.go:89] "kube-apiserver-no-preload-407991" [c59d3076-4de6-4737-a31e-df27cb6b7071] Running
	I0422 18:31:15.081438   77400 system_pods.go:89] "kube-controller-manager-no-preload-407991" [95827f69-45cd-4b37-b4e3-b9d2b9011f58] Running
	I0422 18:31:15.081448   77400 system_pods.go:89] "kube-proxy-47g8k" [9b0f8e68-3a4a-4863-85e7-a5bba444bc39] Running
	I0422 18:31:15.081452   77400 system_pods.go:89] "kube-scheduler-no-preload-407991" [dc06358e-9249-40dd-a9b2-c62915d7aea3] Running
	I0422 18:31:15.081458   77400 system_pods.go:89] "metrics-server-569cc877fc-vrzfj" [b9751edd-f883-48a0-bc18-1dbc9eec191f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 18:31:15.081464   77400 system_pods.go:89] "storage-provisioner" [6c704413-c118-4a17-9a18-e13fd3c092f1] Running
	I0422 18:31:15.081476   77400 system_pods.go:126] duration metric: took 202.881032ms to wait for k8s-apps to be running ...
	I0422 18:31:15.081484   77400 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 18:31:15.081530   77400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:15.098245   77400 system_svc.go:56] duration metric: took 16.748933ms WaitForService to wait for kubelet
	I0422 18:31:15.098278   77400 kubeadm.go:576] duration metric: took 3.738847086s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:31:15.098302   77400 node_conditions.go:102] verifying NodePressure condition ...
	I0422 18:31:15.278812   77400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0422 18:31:15.278839   77400 node_conditions.go:123] node cpu capacity is 2
	I0422 18:31:15.278848   77400 node_conditions.go:105] duration metric: took 180.541553ms to run NodePressure ...
	I0422 18:31:15.278859   77400 start.go:240] waiting for startup goroutines ...
	I0422 18:31:15.278866   77400 start.go:245] waiting for cluster config update ...
	I0422 18:31:15.278875   77400 start.go:254] writing updated cluster config ...
	I0422 18:31:15.279242   77400 ssh_runner.go:195] Run: rm -f paused
	I0422 18:31:15.330788   77400 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 18:31:15.333274   77400 out.go:177] * Done! kubectl is now configured to use "no-preload-407991" cluster and "default" namespace by default
	I0422 18:31:28.163100   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:31:28.163394   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:31:28.163417   78377 kubeadm.go:309] 
	I0422 18:31:28.163487   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:31:28.163724   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:31:28.163734   78377 kubeadm.go:309] 
	I0422 18:31:28.163791   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:31:28.163857   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:31:28.164010   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:31:28.164024   78377 kubeadm.go:309] 
	I0422 18:31:28.164159   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:31:28.164207   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:31:28.164251   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:31:28.164265   78377 kubeadm.go:309] 
	I0422 18:31:28.164413   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:31:28.164579   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:31:28.164607   78377 kubeadm.go:309] 
	I0422 18:31:28.164767   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:31:28.164919   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:31:28.165050   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:31:28.165153   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:31:28.165169   78377 kubeadm.go:309] 
	I0422 18:31:28.166948   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:31:28.167081   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:31:28.167206   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0422 18:31:28.167328   78377 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0422 18:31:28.167404   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0422 18:31:28.857637   78377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 18:31:28.875137   78377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 18:31:28.887680   78377 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 18:31:28.887713   78377 kubeadm.go:156] found existing configuration files:
	
	I0422 18:31:28.887768   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 18:31:28.900305   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 18:31:28.900364   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 18:31:28.912825   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 18:31:28.927080   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 18:31:28.927184   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 18:31:28.939052   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.949650   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 18:31:28.949726   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 18:31:28.960782   78377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 18:31:28.972073   78377 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 18:31:28.972131   78377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 18:31:28.983161   78377 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0422 18:31:29.220135   78377 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 18:33:25.762018   78377 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0422 18:33:25.762162   78377 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0422 18:33:25.763935   78377 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0422 18:33:25.763996   78377 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 18:33:25.764109   78377 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 18:33:25.764234   78377 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 18:33:25.764384   78377 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 18:33:25.764478   78377 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 18:33:25.766215   78377 out.go:204]   - Generating certificates and keys ...
	I0422 18:33:25.766332   78377 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 18:33:25.766425   78377 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 18:33:25.766525   78377 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0422 18:33:25.766612   78377 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0422 18:33:25.766680   78377 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0422 18:33:25.766725   78377 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0422 18:33:25.766778   78377 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0422 18:33:25.766829   78377 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0422 18:33:25.766907   78377 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0422 18:33:25.766999   78377 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0422 18:33:25.767062   78377 kubeadm.go:309] [certs] Using the existing "sa" key
	I0422 18:33:25.767150   78377 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 18:33:25.767210   78377 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 18:33:25.767277   78377 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 18:33:25.767378   78377 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 18:33:25.767465   78377 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 18:33:25.767602   78377 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 18:33:25.767714   78377 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 18:33:25.767848   78377 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 18:33:25.767944   78377 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 18:33:25.769378   78377 out.go:204]   - Booting up control plane ...
	I0422 18:33:25.769497   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 18:33:25.769600   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 18:33:25.769691   78377 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 18:33:25.769819   78377 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 18:33:25.769987   78377 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0422 18:33:25.770059   78377 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0422 18:33:25.770164   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770451   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770538   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.770748   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.770827   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771002   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771066   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771264   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771397   78377 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0422 18:33:25.771583   78377 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0422 18:33:25.771594   78377 kubeadm.go:309] 
	I0422 18:33:25.771655   78377 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0422 18:33:25.771711   78377 kubeadm.go:309] 		timed out waiting for the condition
	I0422 18:33:25.771726   78377 kubeadm.go:309] 
	I0422 18:33:25.771779   78377 kubeadm.go:309] 	This error is likely caused by:
	I0422 18:33:25.771836   78377 kubeadm.go:309] 		- The kubelet is not running
	I0422 18:33:25.771973   78377 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0422 18:33:25.771981   78377 kubeadm.go:309] 
	I0422 18:33:25.772091   78377 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0422 18:33:25.772132   78377 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0422 18:33:25.772175   78377 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0422 18:33:25.772182   78377 kubeadm.go:309] 
	I0422 18:33:25.772286   78377 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0422 18:33:25.772374   78377 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0422 18:33:25.772381   78377 kubeadm.go:309] 
	I0422 18:33:25.772491   78377 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0422 18:33:25.772570   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0422 18:33:25.772641   78377 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0422 18:33:25.772702   78377 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0422 18:33:25.772741   78377 kubeadm.go:309] 
	I0422 18:33:25.772767   78377 kubeadm.go:393] duration metric: took 7m59.977108208s to StartCluster
	I0422 18:33:25.772800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0422 18:33:25.772854   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0422 18:33:25.824904   78377 cri.go:89] found id: ""
	I0422 18:33:25.824928   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.824946   78377 logs.go:278] No container was found matching "kube-apiserver"
	I0422 18:33:25.824957   78377 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0422 18:33:25.825011   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0422 18:33:25.864537   78377 cri.go:89] found id: ""
	I0422 18:33:25.864563   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.864570   78377 logs.go:278] No container was found matching "etcd"
	I0422 18:33:25.864575   78377 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0422 18:33:25.864630   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0422 18:33:25.906760   78377 cri.go:89] found id: ""
	I0422 18:33:25.906784   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.906793   78377 logs.go:278] No container was found matching "coredns"
	I0422 18:33:25.906800   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0422 18:33:25.906868   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0422 18:33:25.945325   78377 cri.go:89] found id: ""
	I0422 18:33:25.945347   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.945354   78377 logs.go:278] No container was found matching "kube-scheduler"
	I0422 18:33:25.945360   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0422 18:33:25.945407   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0422 18:33:25.984005   78377 cri.go:89] found id: ""
	I0422 18:33:25.984035   78377 logs.go:276] 0 containers: []
	W0422 18:33:25.984052   78377 logs.go:278] No container was found matching "kube-proxy"
	I0422 18:33:25.984059   78377 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0422 18:33:25.984121   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0422 18:33:26.023499   78377 cri.go:89] found id: ""
	I0422 18:33:26.023525   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.023535   78377 logs.go:278] No container was found matching "kube-controller-manager"
	I0422 18:33:26.023549   78377 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0422 18:33:26.023611   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0422 18:33:26.064439   78377 cri.go:89] found id: ""
	I0422 18:33:26.064468   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.064479   78377 logs.go:278] No container was found matching "kindnet"
	I0422 18:33:26.064487   78377 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0422 18:33:26.064552   78377 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0422 18:33:26.104231   78377 cri.go:89] found id: ""
	I0422 18:33:26.104254   78377 logs.go:276] 0 containers: []
	W0422 18:33:26.104262   78377 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0422 18:33:26.104270   78377 logs.go:123] Gathering logs for CRI-O ...
	I0422 18:33:26.104282   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0422 18:33:26.213826   78377 logs.go:123] Gathering logs for container status ...
	I0422 18:33:26.213871   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:33:26.278837   78377 logs.go:123] Gathering logs for kubelet ...
	I0422 18:33:26.278866   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0422 18:33:26.337634   78377 logs.go:123] Gathering logs for dmesg ...
	I0422 18:33:26.337677   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:33:26.351578   78377 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:33:26.351605   78377 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0422 18:33:26.445108   78377 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0422 18:33:26.445139   78377 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0422 18:33:26.445177   78377 out.go:239] * 
	W0422 18:33:26.445248   78377 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.445279   78377 out.go:239] * 
	W0422 18:33:26.446406   78377 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:33:26.450209   78377 out.go:177] 
	W0422 18:33:26.451494   78377 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0422 18:33:26.451552   78377 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0422 18:33:26.451576   78377 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0422 18:33:26.453333   78377 out.go:177] 
	
	
	==> CRI-O <==
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.186156795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811507186136421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5e47b92-c4fc-4cfc-807f-e1c81cc17fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.186778161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71aefb97-d98b-4e7f-bc2d-88f5c77fbcca name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.186843795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71aefb97-d98b-4e7f-bc2d-88f5c77fbcca name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.186893808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=71aefb97-d98b-4e7f-bc2d-88f5c77fbcca name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.218773941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0daa44e7-3270-4c86-aa32-22048f6843a0 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.218877098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0daa44e7-3270-4c86-aa32-22048f6843a0 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.219743866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e3a9e3d-e1e0-4fa0-b08d-f28a0d838156 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.220231703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811507220207293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e3a9e3d-e1e0-4fa0-b08d-f28a0d838156 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.220802520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7990af37-3a3b-45bd-860f-18e373de51e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.220885484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7990af37-3a3b-45bd-860f-18e373de51e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.220920712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7990af37-3a3b-45bd-860f-18e373de51e3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.253033782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3644b5c2-f681-4670-bcab-33d424af0dcf name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.253113515Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3644b5c2-f681-4670-bcab-33d424af0dcf name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.254542082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9306de35-53df-48fe-afdf-a913b50da75a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.254951614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811507254921911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9306de35-53df-48fe-afdf-a913b50da75a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.255545078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=028fbbcc-e23e-4170-a4b8-13bd3f401f2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.255604191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=028fbbcc-e23e-4170-a4b8-13bd3f401f2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.255635718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=028fbbcc-e23e-4170-a4b8-13bd3f401f2f name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.293344548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=53c766d4-e953-48fe-aec4-531c38dd7a92 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.293537989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=53c766d4-e953-48fe-aec4-531c38dd7a92 name=/runtime.v1.RuntimeService/Version
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.294721024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e58b6fe-a227-4960-987a-f12a82953963 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.295174230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713811507295150324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e58b6fe-a227-4960-987a-f12a82953963 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.295835361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32643133-6f2e-4574-a0cb-b05f45c54169 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.295926707Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32643133-6f2e-4574-a0cb-b05f45c54169 name=/runtime.v1.RuntimeService/ListContainers
	Apr 22 18:45:07 old-k8s-version-367072 crio[648]: time="2024-04-22 18:45:07.296010033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=32643133-6f2e-4574-a0cb-b05f45c54169 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr22 18:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054750] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043660] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Apr22 18:25] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.922715] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.744071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.637131] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.065794] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061682] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.221839] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.164619] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.287340] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +7.158439] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.071484] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.066379] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[ +11.632913] kauditd_printk_skb: 46 callbacks suppressed
	[Apr22 18:29] systemd-fstab-generator[4961]: Ignoring "noauto" option for root device
	[Apr22 18:31] systemd-fstab-generator[5238]: Ignoring "noauto" option for root device
	[  +0.069844] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:45:07 up 20 min,  0 users,  load average: 0.00, 0.05, 0.06
	Linux old-k8s-version-367072 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000c66a20, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b71620, 0x24, 0x0, ...)
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]: net.(*Dialer).DialContext(0xc000b526c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b71620, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000b5c740, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b71620, 0x24, 0x60, 0x7f82f1792d20, 0x118, ...)
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]: net/http.(*Transport).dial(0xc0003e6280, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b71620, 0x24, 0x0, 0x1, 0x0, ...)
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]: net/http.(*Transport).dialConn(0xc0003e6280, 0x4f7fe00, 0xc000120018, 0x0, 0xc000c2c6c0, 0x5, 0xc000b71620, 0x24, 0x0, 0xc000c61c20, ...)
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]: net/http.(*Transport).dialConnFor(0xc0003e6280, 0xc00063cc60)
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]: created by net/http.(*Transport).queueForDial
	Apr 22 18:45:02 old-k8s-version-367072 kubelet[6759]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 22 18:45:02 old-k8s-version-367072 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 22 18:45:02 old-k8s-version-367072 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 22 18:45:03 old-k8s-version-367072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 141.
	Apr 22 18:45:03 old-k8s-version-367072 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 22 18:45:03 old-k8s-version-367072 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 22 18:45:03 old-k8s-version-367072 kubelet[6769]: I0422 18:45:03.279825    6769 server.go:416] Version: v1.20.0
	Apr 22 18:45:03 old-k8s-version-367072 kubelet[6769]: I0422 18:45:03.280264    6769 server.go:837] Client rotation is on, will bootstrap in background
	Apr 22 18:45:03 old-k8s-version-367072 kubelet[6769]: I0422 18:45:03.284263    6769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 22 18:45:03 old-k8s-version-367072 kubelet[6769]: W0422 18:45:03.286012    6769 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 22 18:45:03 old-k8s-version-367072 kubelet[6769]: I0422 18:45:03.286757    6769 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
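The captured log above ends with the kubelet crash-looping (systemd restart counter at 141) and no control-plane containers found. As a minimal sketch, the on-node triage that the kubeadm and minikube output itself recommends would be the following, run from inside the node (e.g. via `minikube ssh -p old-k8s-version-367072`); the crio socket path and flags are the ones shown verbatim in the log:

	sudo systemctl status kubelet        # is the kubelet running at all?
	sudo journalctl -xeu kubelet         # why it keeps exiting (restart loop shown above)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause   # any control-plane containers started?
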
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 2 (243.431495ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-367072" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (155.54s)
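
For reference, the remediation minikube prints at 18:33:26 above is to pass the kubelet cgroup driver explicitly. A hedged sketch of that retry for this profile follows; the kvm2 driver and crio runtime flags are assumed from the other v1.20.0 invocations in this report, not taken from this failing run:

	minikube start -p old-k8s-version-367072 \
	  --kubernetes-version=v1.20.0 \
	  --container-runtime=crio \
	  --driver=kvm2 \
	  --extra-config=kubelet.cgroup-driver=systemd

The issue linked in the output, https://github.com/kubernetes/minikube/issues/4172, tracks the same kubelet cgroup-driver symptom.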

                                                
                                    

Test pass (244/311)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.27
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.0/json-events 13.69
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.07
18 TestDownloadOnly/v1.30.0/DeleteAll 0.14
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 112.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 155.04
29 TestAddons/parallel/Registry 16.86
31 TestAddons/parallel/InspektorGadget 34.14
33 TestAddons/parallel/HelmTiller 12.29
35 TestAddons/parallel/CSI 103.84
36 TestAddons/parallel/Headlamp 13.07
37 TestAddons/parallel/CloudSpanner 6.65
38 TestAddons/parallel/LocalPath 55.23
39 TestAddons/parallel/NvidiaDevicePlugin 5.54
40 TestAddons/parallel/Yakd 5.01
43 TestAddons/serial/GCPAuth/Namespaces 0.12
45 TestCertOptions 88.53
46 TestCertExpiration 277.71
48 TestForceSystemdFlag 47.25
49 TestForceSystemdEnv 67.38
51 TestKVMDriverInstallOrUpdate 4.1
55 TestErrorSpam/setup 43.19
56 TestErrorSpam/start 0.38
57 TestErrorSpam/status 0.75
58 TestErrorSpam/pause 1.65
59 TestErrorSpam/unpause 1.64
60 TestErrorSpam/stop 5.87
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 58.88
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 38.14
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.63
72 TestFunctional/serial/CacheCmd/cache/add_local 2.38
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.06
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.12
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
80 TestFunctional/serial/ExtraConfig 30.77
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.62
83 TestFunctional/serial/LogsFileCmd 1.53
84 TestFunctional/serial/InvalidService 4.63
86 TestFunctional/parallel/ConfigCmd 0.41
88 TestFunctional/parallel/DryRun 0.29
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 1.04
94 TestFunctional/parallel/ServiceCmdConnect 9.69
95 TestFunctional/parallel/AddonsCmd 0.16
96 TestFunctional/parallel/PersistentVolumeClaim 39.68
98 TestFunctional/parallel/SSHCmd 0.42
99 TestFunctional/parallel/CpCmd 1.39
100 TestFunctional/parallel/MySQL 28.78
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.43
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
110 TestFunctional/parallel/License 0.59
111 TestFunctional/parallel/Version/short 0.06
112 TestFunctional/parallel/Version/components 0.49
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
117 TestFunctional/parallel/ImageCommands/ImageBuild 6.12
118 TestFunctional/parallel/ImageCommands/Setup 1.94
119 TestFunctional/parallel/ServiceCmd/DeployApp 10.21
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.5
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.76
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.51
132 TestFunctional/parallel/ServiceCmd/List 1.05
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
135 TestFunctional/parallel/ServiceCmd/Format 0.41
136 TestFunctional/parallel/ServiceCmd/URL 0.36
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
138 TestFunctional/parallel/ProfileCmd/profile_list 0.45
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
140 TestFunctional/parallel/MountCmd/any-port 19.52
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.65
142 TestFunctional/parallel/ImageCommands/ImageRemove 1.9
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 8.63
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.52
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
148 TestFunctional/parallel/MountCmd/specific-port 1.85
149 TestFunctional/parallel/MountCmd/VerifyCleanup 0.78
150 TestFunctional/delete_addon-resizer_images 0.06
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
156 TestMultiControlPlane/serial/StartCluster 200.89
157 TestMultiControlPlane/serial/DeployApp 6.14
158 TestMultiControlPlane/serial/PingHostFromPods 1.35
159 TestMultiControlPlane/serial/AddWorkerNode 48.49
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
162 TestMultiControlPlane/serial/CopyFile 13.55
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.49
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
168 TestMultiControlPlane/serial/DeleteSecondaryNode 17.44
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
171 TestMultiControlPlane/serial/RestartCluster 291.32
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
173 TestMultiControlPlane/serial/AddSecondaryNode 76.16
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
178 TestJSONOutput/start/Command 60.66
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.73
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.71
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.39
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.21
206 TestMainNoArgs 0.05
207 TestMinikubeProfile 89.59
210 TestMountStart/serial/StartWithMountFirst 27.03
211 TestMountStart/serial/VerifyMountFirst 0.39
212 TestMountStart/serial/StartWithMountSecond 28.16
213 TestMountStart/serial/VerifyMountSecond 0.38
214 TestMountStart/serial/DeleteFirst 0.87
215 TestMountStart/serial/VerifyMountPostDelete 0.39
216 TestMountStart/serial/Stop 1.31
217 TestMountStart/serial/RestartStopped 25.18
218 TestMountStart/serial/VerifyMountPostStop 0.39
221 TestMultiNode/serial/FreshStart2Nodes 134.14
222 TestMultiNode/serial/DeployApp2Nodes 5.01
223 TestMultiNode/serial/PingHostFrom2Pods 0.84
224 TestMultiNode/serial/AddNode 41.66
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.23
227 TestMultiNode/serial/CopyFile 7.46
228 TestMultiNode/serial/StopNode 2.43
229 TestMultiNode/serial/StartAfterStop 29.41
231 TestMultiNode/serial/DeleteNode 2.41
233 TestMultiNode/serial/RestartMultiNode 169.47
234 TestMultiNode/serial/ValidateNameConflict 48.07
241 TestScheduledStopUnix 113.64
245 TestRunningBinaryUpgrade 186.55
256 TestNetworkPlugins/group/false 3.89
260 TestStoppedBinaryUpgrade/Setup 2.34
261 TestStoppedBinaryUpgrade/Upgrade 182.67
270 TestPause/serial/Start 68.7
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
275 TestNoKubernetes/serial/StartWithK8s 47.27
276 TestNoKubernetes/serial/StartWithStopK8s 53.86
277 TestNoKubernetes/serial/Start 29.07
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
279 TestNoKubernetes/serial/ProfileList 6.56
280 TestNoKubernetes/serial/Stop 1.38
281 TestNoKubernetes/serial/StartNoArgs 62.06
282 TestNetworkPlugins/group/auto/Start 105.82
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
284 TestNetworkPlugins/group/kindnet/Start 126.92
285 TestNetworkPlugins/group/custom-flannel/Start 115.36
286 TestNetworkPlugins/group/auto/KubeletFlags 0.22
287 TestNetworkPlugins/group/auto/NetCatPod 11.26
288 TestNetworkPlugins/group/auto/DNS 0.18
289 TestNetworkPlugins/group/auto/Localhost 0.17
290 TestNetworkPlugins/group/auto/HairPin 0.2
291 TestNetworkPlugins/group/enable-default-cni/Start 98.48
292 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
293 TestNetworkPlugins/group/flannel/Start 96.78
294 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
295 TestNetworkPlugins/group/kindnet/NetCatPod 10.24
296 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
297 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
298 TestNetworkPlugins/group/kindnet/DNS 0.2
299 TestNetworkPlugins/group/kindnet/Localhost 0.16
300 TestNetworkPlugins/group/kindnet/HairPin 0.19
301 TestNetworkPlugins/group/custom-flannel/DNS 0.17
302 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
303 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
304 TestNetworkPlugins/group/bridge/Start 73.15
305 TestNetworkPlugins/group/calico/Start 134.05
306 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
307 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
308 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
309 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
310 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
311 TestNetworkPlugins/group/flannel/ControllerPod 6.01
312 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
313 TestNetworkPlugins/group/flannel/NetCatPod 12.32
314 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
315 TestNetworkPlugins/group/bridge/NetCatPod 11.34
318 TestNetworkPlugins/group/bridge/DNS 0.2
319 TestNetworkPlugins/group/bridge/Localhost 0.16
320 TestNetworkPlugins/group/bridge/HairPin 0.23
321 TestNetworkPlugins/group/flannel/DNS 0.21
322 TestNetworkPlugins/group/flannel/Localhost 0.2
323 TestNetworkPlugins/group/flannel/HairPin 0.17
325 TestStartStop/group/no-preload/serial/FirstStart 91.63
327 TestStartStop/group/embed-certs/serial/FirstStart 96.51
328 TestNetworkPlugins/group/calico/ControllerPod 5.18
329 TestNetworkPlugins/group/calico/KubeletFlags 0.28
330 TestNetworkPlugins/group/calico/NetCatPod 12.26
331 TestNetworkPlugins/group/calico/DNS 0.19
332 TestNetworkPlugins/group/calico/Localhost 0.13
333 TestNetworkPlugins/group/calico/HairPin 0.15
335 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.94
336 TestStartStop/group/no-preload/serial/DeployApp 10.34
337 TestStartStop/group/embed-certs/serial/DeployApp 11.29
338 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.26
340 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
342 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.29
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
347 TestStartStop/group/no-preload/serial/SecondStart 694.99
350 TestStartStop/group/embed-certs/serial/SecondStart 614.13
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 598.17
353 TestStartStop/group/old-k8s-version/serial/Stop 2.3
354 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/newest-cni/serial/FirstStart 60.26
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
368 TestStartStop/group/newest-cni/serial/Stop 10.66
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
370 TestStartStop/group/newest-cni/serial/SecondStart 38.36
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
374 TestStartStop/group/newest-cni/serial/Pause 2.42
TestDownloadOnly/v1.20.0/json-events (22.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-330754 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-330754 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.265424578s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.27s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-330754
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-330754: exit status 85 (71.060254ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-330754 | jenkins | v1.33.0 | 22 Apr 24 16:56 UTC |          |
	|         | -p download-only-330754        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 16:56:55
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 16:56:55.370462   18896 out.go:291] Setting OutFile to fd 1 ...
	I0422 16:56:55.370567   18896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:56:55.370571   18896 out.go:304] Setting ErrFile to fd 2...
	I0422 16:56:55.370575   18896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:56:55.370780   18896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	W0422 16:56:55.370919   18896 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18706-11572/.minikube/config/config.json: open /home/jenkins/minikube-integration/18706-11572/.minikube/config/config.json: no such file or directory
	I0422 16:56:55.371513   18896 out.go:298] Setting JSON to true
	I0422 16:56:55.372365   18896 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2361,"bootTime":1713802655,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 16:56:55.372431   18896 start.go:139] virtualization: kvm guest
	I0422 16:56:55.375035   18896 out.go:97] [download-only-330754] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 16:56:55.376864   18896 out.go:169] MINIKUBE_LOCATION=18706
	I0422 16:56:55.375176   18896 notify.go:220] Checking for updates...
	W0422 16:56:55.375259   18896 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball: no such file or directory
	I0422 16:56:55.380214   18896 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 16:56:55.382327   18896 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 16:56:55.384341   18896 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:56:55.386025   18896 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0422 16:56:55.388454   18896 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 16:56:55.388684   18896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 16:56:55.486770   18896 out.go:97] Using the kvm2 driver based on user configuration
	I0422 16:56:55.486802   18896 start.go:297] selected driver: kvm2
	I0422 16:56:55.486808   18896 start.go:901] validating driver "kvm2" against <nil>
	I0422 16:56:55.487194   18896 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:56:55.487359   18896 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 16:56:55.502594   18896 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 16:56:55.502643   18896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 16:56:55.503199   18896 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0422 16:56:55.503350   18896 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 16:56:55.503412   18896 cni.go:84] Creating CNI manager for ""
	I0422 16:56:55.503432   18896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:56:55.503439   18896 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 16:56:55.503504   18896 start.go:340] cluster config:
	{Name:download-only-330754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-330754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:56:55.503674   18896 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:56:55.505599   18896 out.go:97] Downloading VM boot image ...
	I0422 16:56:55.505643   18896 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0422 16:57:03.842290   18896 out.go:97] Starting "download-only-330754" primary control-plane node in "download-only-330754" cluster
	I0422 16:57:03.842332   18896 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 16:57:03.938741   18896 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0422 16:57:03.938769   18896 cache.go:56] Caching tarball of preloaded images
	I0422 16:57:03.938907   18896 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0422 16:57:03.940833   18896 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0422 16:57:03.940858   18896 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0422 16:57:04.038297   18896 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-330754 host does not exist
	  To start a cluster, run: "minikube start -p download-only-330754"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-330754
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (13.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-029298 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-029298 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.691158805s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (13.69s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-029298
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-029298: exit status 85 (71.54393ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-330754 | jenkins | v1.33.0 | 22 Apr 24 16:56 UTC |                     |
	|         | -p download-only-330754        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-330754        | download-only-330754 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| start   | -o=json --download-only        | download-only-029298 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | -p download-only-029298        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 16:57:17
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 16:57:17.979071   19126 out.go:291] Setting OutFile to fd 1 ...
	I0422 16:57:17.979213   19126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:17.979222   19126 out.go:304] Setting ErrFile to fd 2...
	I0422 16:57:17.979227   19126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:17.979415   19126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 16:57:17.979993   19126 out.go:298] Setting JSON to true
	I0422 16:57:17.980868   19126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2383,"bootTime":1713802655,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 16:57:17.980930   19126 start.go:139] virtualization: kvm guest
	I0422 16:57:17.983174   19126 out.go:97] [download-only-029298] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 16:57:17.984694   19126 out.go:169] MINIKUBE_LOCATION=18706
	I0422 16:57:17.983393   19126 notify.go:220] Checking for updates...
	I0422 16:57:17.987365   19126 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 16:57:17.988692   19126 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 16:57:17.989888   19126 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 16:57:17.991213   19126 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0422 16:57:17.993459   19126 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 16:57:17.993715   19126 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 16:57:18.026760   19126 out.go:97] Using the kvm2 driver based on user configuration
	I0422 16:57:18.026798   19126 start.go:297] selected driver: kvm2
	I0422 16:57:18.026803   19126 start.go:901] validating driver "kvm2" against <nil>
	I0422 16:57:18.027135   19126 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:57:18.027219   19126 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18706-11572/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0422 16:57:18.042702   19126 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0
	I0422 16:57:18.042759   19126 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 16:57:18.043252   19126 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0422 16:57:18.043386   19126 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 16:57:18.043435   19126 cni.go:84] Creating CNI manager for ""
	I0422 16:57:18.043446   19126 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0422 16:57:18.043458   19126 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 16:57:18.043516   19126 start.go:340] cluster config:
	{Name:download-only-029298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-029298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:57:18.043605   19126 iso.go:125] acquiring lock: {Name:mk1b12d9597d526423aa9e018b261917a87c343d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:57:18.045498   19126 out.go:97] Starting "download-only-029298" primary control-plane node in "download-only-029298" cluster
	I0422 16:57:18.045531   19126 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 16:57:18.567117   19126 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	I0422 16:57:18.567160   19126 cache.go:56] Caching tarball of preloaded images
	I0422 16:57:18.567341   19126 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime crio
	I0422 16:57:18.569206   19126 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0422 16:57:18.569228   19126 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4 ...
	I0422 16:57:19.126425   19126 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5927bd9d05f26d08fc05540d1d92e5d8 -> /home/jenkins/minikube-integration/18706-11572/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-029298 host does not exist
	  To start a cluster, run: "minikube start -p download-only-029298"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-029298
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-330619 --alsologtostderr --binary-mirror http://127.0.0.1:35745 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-330619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-330619
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestOffline (112.58s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-417483 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-417483 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m51.556833538s)
helpers_test.go:175: Cleaning up "offline-crio-417483" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-417483
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-417483: (1.02390533s)
--- PASS: TestOffline (112.58s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-934361
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-934361: exit status 85 (64.492035ms)

                                                
                                                
-- stdout --
	* Profile "addons-934361" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-934361"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-934361
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-934361: exit status 85 (63.394598ms)

                                                
                                                
-- stdout --
	* Profile "addons-934361" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-934361"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (155.04s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-934361 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-934361 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.035194809s)
--- PASS: TestAddons/Setup (155.04s)

                                                
                                    
TestAddons/parallel/Registry (16.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 23.001314ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-srp9r" [b6334572-9ae2-4f63-8d71-d5ec2df78324] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005698377s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nzg6s" [033a658d-3f50-4962-ac56-dcf30ac650c7] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005449288s
addons_test.go:340: (dbg) Run:  kubectl --context addons-934361 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-934361 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-934361 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.033979078s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.86s)

                                                
                                    
TestAddons/parallel/InspektorGadget (34.14s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cnbld" [ce382dc7-40bc-45d2-b350-7d0fa3f95d09] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005181773s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-934361
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-934361: (28.132940212s)
--- PASS: TestAddons/parallel/InspektorGadget (34.14s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.29s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 3.935563ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-fp7n8" [8ca5bebc-4067-46c4-b889-2eae5e85437d] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005019115s
addons_test.go:473: (dbg) Run:  kubectl --context addons-934361 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-934361 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.664253801s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.29s)

                                                
                                    
TestAddons/parallel/CSI (103.84s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 31.006587ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-934361 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/22 17:00:24 [DEBUG] GET http://192.168.39.135:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-934361 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4b68e06c-3cfd-45db-b6cc-37e4440137fe] Pending
helpers_test.go:344: "task-pv-pod" [4b68e06c-3cfd-45db-b6cc-37e4440137fe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4b68e06c-3cfd-45db-b6cc-37e4440137fe] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004424304s
addons_test.go:584: (dbg) Run:  kubectl --context addons-934361 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-934361 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-934361 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-934361 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-934361 delete pod task-pv-pod: (1.239707082s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-934361 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-934361 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-934361 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3a744944-f486-474d-93c5-9fefd3539474] Pending
helpers_test.go:344: "task-pv-pod-restore" [3a744944-f486-474d-93c5-9fefd3539474] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3a744944-f486-474d-93c5-9fefd3539474] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003835856s
addons_test.go:626: (dbg) Run:  kubectl --context addons-934361 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-934361 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-934361 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-934361 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.849909184s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (103.84s)

                                                
                                    
TestAddons/parallel/Headlamp (13.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-934361 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-934361 --alsologtostderr -v=1: (1.066960457s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-jx57l" [05d7c185-fcb3-4db1-941f-58c4cf86a75f] Pending
helpers_test.go:344: "headlamp-7559bf459f-jx57l" [05d7c185-fcb3-4db1-941f-58c4cf86a75f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-jx57l" [05d7c185-fcb3-4db1-941f-58c4cf86a75f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004401056s
--- PASS: TestAddons/parallel/Headlamp (13.07s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-f4x48" [b56ee286-aea3-43c3-b31c-2492a9fb8d6e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005518183s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-934361
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                    
TestAddons/parallel/LocalPath (55.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-934361 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-934361 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e93fe5b1-96c1-439c-877c-e3bb26961afd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e93fe5b1-96c1-439c-877c-e3bb26961afd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e93fe5b1-96c1-439c-877c-e3bb26961afd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004677998s
addons_test.go:891: (dbg) Run:  kubectl --context addons-934361 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 ssh "cat /opt/local-path-provisioner/pvc-0ebcd1de-0138-48d2-b5bd-8d480b1e737e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-934361 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-934361 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-934361 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-934361 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.419208906s)
--- PASS: TestAddons/parallel/LocalPath (55.23s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ht2fz" [9f97974a-3d52-4db6-9187-920d1c7c72f3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005700764s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-934361
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-dqx5m" [3cca16d5-c0b9-4588-87c2-aa2cdbcbe7d9] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004475118s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-934361 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-934361 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestCertOptions (88.53s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-709321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-709321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m27.192291746s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-709321 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-709321 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-709321 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-709321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-709321
--- PASS: TestCertOptions (88.53s)
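
A manual equivalent of the SAN check above, assuming an illustrative profile name, would be:

	out/minikube-linux-amd64 start -p cert-options-demo --memory=2048 \
	  --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
	# the extra IPs/names passed at start should appear as SANs on the generated apiserver certificate:
	out/minikube-linux-amd64 -p cert-options-demo ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Alternative Name'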

                                                
                                    
TestCertExpiration (277.71s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-076896 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-076896 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.600898973s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-076896 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-076896 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (52.040903819s)
helpers_test.go:175: Cleaning up "cert-expiration-076896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-076896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-076896: (1.067711868s)
--- PASS: TestCertExpiration (277.71s)
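
The scenario above is two starts of the same profile: the first issues 3-minute certificates, the test waits out that window, and the second start has to regenerate them with the new --cert-expiration value. A rough manual sketch (profile name illustrative; the sleep stands in for the wait):

	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 180   # let the short-lived certificates lapse
	out/minikube-linux-amd64 start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio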

                                                
                                    
TestForceSystemdFlag (47.25s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-461193 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-461193 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.240189026s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-461193 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-461193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-461193
--- PASS: TestForceSystemdFlag (47.25s)
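
What the test checks is that --force-systemd leaves CRI-O configured with the systemd cgroup manager. A manual sketch (profile name illustrative; assumes the setting lives in the drop-in file read above):

	out/minikube-linux-amd64 start -p force-systemd-demo --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected to show a systemd cgroup manager rather than cgroupfs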

                                                
                                    
TestForceSystemdEnv (67.38s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-005444 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-005444 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.355270048s)
helpers_test.go:175: Cleaning up "force-systemd-env-005444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-005444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-005444: (1.023184066s)
--- PASS: TestForceSystemdEnv (67.38s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.1s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.10s)

                                                
                                    
TestErrorSpam/setup (43.19s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-791946 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-791946 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-791946 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-791946 --driver=kvm2  --container-runtime=crio: (43.191084912s)
--- PASS: TestErrorSpam/setup (43.19s)

                                                
                                    
TestErrorSpam/start (0.38s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

                                                
                                    
TestErrorSpam/status (0.75s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 status
--- PASS: TestErrorSpam/status (0.75s)

                                                
                                    
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
TestErrorSpam/unpause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

                                                
                                    
TestErrorSpam/stop (5.87s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 stop: (2.311078012s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 stop: (2.001006288s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-791946 --log_dir /tmp/nospam-791946 stop: (1.559359236s)
--- PASS: TestErrorSpam/stop (5.87s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18706-11572/.minikube/files/etc/test/nested/copy/18884/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (58.88s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-005894 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-005894 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (58.884384186s)
--- PASS: TestFunctional/serial/StartWithProxy (58.88s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (38.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-005894 --alsologtostderr -v=8
E0422 17:10:07.902620   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:07.908394   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:07.918726   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:07.939023   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:07.979321   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:08.059676   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:08.220318   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:08.540991   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:09.182100   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:10.463273   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:13.023475   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:18.144401   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:10:28.385429   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-005894 --alsologtostderr -v=8: (38.139919859s)
functional_test.go:659: soft start took 38.140541666s for "functional-005894" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.14s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-005894 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 cache add registry.k8s.io/pause:3.1: (1.217580759s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 cache add registry.k8s.io/pause:3.3: (1.249316789s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 cache add registry.k8s.io/pause:latest: (1.15795734s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-005894 /tmp/TestFunctionalserialCacheCmdcacheadd_local697585613/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cache add minikube-local-cache-test:functional-005894
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 cache add minikube-local-cache-test:functional-005894: (2.006540326s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cache delete minikube-local-cache-test:functional-005894
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-005894
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.69486ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
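
The reload sequence above boils down to: delete the image inside the node, confirm crictl no longer finds it, repopulate it from minikube's on-host cache, and confirm it is back:

	out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # non-zero exit: image gone
	out/minikube-linux-amd64 -p functional-005894 cache reload
	out/minikube-linux-amd64 -p functional-005894 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again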

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 kubectl -- --context functional-005894 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-005894 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (30.77s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-005894 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0422 17:10:48.866499   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-005894 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (30.77158108s)
functional_test.go:757: restart took 30.77169724s for "functional-005894" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (30.77s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-005894 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 logs: (1.617043432s)
--- PASS: TestFunctional/serial/LogsCmd (1.62s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 logs --file /tmp/TestFunctionalserialLogsFileCmd2291628710/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 logs --file /tmp/TestFunctionalserialLogsFileCmd2291628710/001/logs.txt: (1.530578855s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
TestFunctional/serial/InvalidService (4.63s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-005894 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-005894
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-005894: exit status 115 (289.83261ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.154:30096 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-005894 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-005894 delete -f testdata/invalidsvc.yaml: (1.12240669s)
--- PASS: TestFunctional/serial/InvalidService (4.63s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 config get cpus: exit status 14 (65.069319ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 config get cpus: exit status 14 (59.594983ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
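
The two exit-status-14 results above are the expected behaviour of `config get` for a key that is not set; the round trip is:

	out/minikube-linux-amd64 -p functional-005894 config get cpus     # exit 14: key not in config
	out/minikube-linux-amd64 -p functional-005894 config set cpus 2
	out/minikube-linux-amd64 -p functional-005894 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-005894 config unset cpus
	out/minikube-linux-amd64 -p functional-005894 config get cpus     # exit 14 again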

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-005894 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-005894 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.294665ms)

                                                
                                                
-- stdout --
	* [functional-005894] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:11:48.677937   28294 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:11:48.678087   28294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:11:48.678093   28294 out.go:304] Setting ErrFile to fd 2...
	I0422 17:11:48.678097   28294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:11:48.678338   28294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:11:48.678862   28294 out.go:298] Setting JSON to false
	I0422 17:11:48.679842   28294 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3254,"bootTime":1713802655,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:11:48.679906   28294 start.go:139] virtualization: kvm guest
	I0422 17:11:48.682369   28294 out.go:177] * [functional-005894] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 17:11:48.683951   28294 notify.go:220] Checking for updates...
	I0422 17:11:48.683958   28294 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:11:48.685643   28294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:11:48.687035   28294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:11:48.688550   28294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:11:48.689868   28294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:11:48.691184   28294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:11:48.693123   28294 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:11:48.693765   28294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:11:48.693819   28294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:11:48.709232   28294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0422 17:11:48.709731   28294 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:11:48.710309   28294 main.go:141] libmachine: Using API Version  1
	I0422 17:11:48.710325   28294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:11:48.710697   28294 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:11:48.710939   28294 main.go:141] libmachine: (functional-005894) Calling .DriverName
	I0422 17:11:48.711256   28294 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:11:48.711705   28294 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:11:48.711758   28294 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:11:48.728886   28294 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I0422 17:11:48.729398   28294 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:11:48.729889   28294 main.go:141] libmachine: Using API Version  1
	I0422 17:11:48.729913   28294 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:11:48.730276   28294 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:11:48.730456   28294 main.go:141] libmachine: (functional-005894) Calling .DriverName
	I0422 17:11:48.765454   28294 out.go:177] * Using the kvm2 driver based on existing profile
	I0422 17:11:48.766934   28294 start.go:297] selected driver: kvm2
	I0422 17:11:48.766949   28294 start.go:901] validating driver "kvm2" against &{Name:functional-005894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-005894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:11:48.767068   28294 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:11:48.769561   28294 out.go:177] 
	W0422 17:11:48.770883   28294 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0422 17:11:48.772401   28294 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-005894 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
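
The exit status 23 above is intentional: even with --dry-run, minikube validates the requested resources, and 250MB is below the 1800MB usable minimum, so it aborts with RSRC_INSUFFICIENT_REQ_MEMORY; the second invocation, which omits the undersized --memory, passes validation:

	out/minikube-linux-amd64 start -p functional-005894 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23
	out/minikube-linux-amd64 start -p functional-005894 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio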

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-005894 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-005894 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.825887ms)

                                                
                                                
-- stdout --
	* [functional-005894] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:11:48.971992   28350 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:11:48.972263   28350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:11:48.972273   28350 out.go:304] Setting ErrFile to fd 2...
	I0422 17:11:48.972277   28350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:11:48.972540   28350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:11:48.973039   28350 out.go:298] Setting JSON to false
	I0422 17:11:48.973851   28350 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3254,"bootTime":1713802655,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 17:11:48.973913   28350 start.go:139] virtualization: kvm guest
	I0422 17:11:48.975969   28350 out.go:177] * [functional-005894] minikube v1.33.0 sur Ubuntu 20.04 (kvm/amd64)
	I0422 17:11:48.977674   28350 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:11:48.977693   28350 notify.go:220] Checking for updates...
	I0422 17:11:48.979333   28350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:11:48.980981   28350 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 17:11:48.982763   28350 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 17:11:48.984518   28350 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 17:11:48.986223   28350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:11:48.988430   28350 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:11:48.988823   28350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:11:48.988892   28350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:11:49.003790   28350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
	I0422 17:11:49.004209   28350 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:11:49.004710   28350 main.go:141] libmachine: Using API Version  1
	I0422 17:11:49.004731   28350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:11:49.005085   28350 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:11:49.005397   28350 main.go:141] libmachine: (functional-005894) Calling .DriverName
	I0422 17:11:49.005726   28350 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:11:49.006182   28350 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:11:49.006235   28350 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:11:49.020844   28350 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44071
	I0422 17:11:49.021225   28350 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:11:49.021736   28350 main.go:141] libmachine: Using API Version  1
	I0422 17:11:49.021758   28350 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:11:49.022101   28350 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:11:49.022366   28350 main.go:141] libmachine: (functional-005894) Calling .DriverName
	I0422 17:11:49.056477   28350 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0422 17:11:49.058462   28350 start.go:297] selected driver: kvm2
	I0422 17:11:49.058480   28350 start.go:901] validating driver "kvm2" against &{Name:functional-005894 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-005894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:11:49.058583   28350 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:11:49.061055   28350 out.go:177] 
	W0422 17:11:49.062591   28350 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0422 17:11:49.064030   28350 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-005894 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-005894 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-m29cx" [f353c370-a03d-401b-8c2b-e58286e79a22] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-m29cx" [f353c370-a03d-401b-8c2b-e58286e79a22] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004208248s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.154:31260
functional_test.go:1671: http://192.168.39.154:31260: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-m29cx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.154:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.154:31260
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.69s)
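
A manual version of the NodePort round trip exercised above (deployment name and image taken from the log):

	kubectl --context functional-005894 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-005894 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-005894 service hello-node-connect --url)
	curl -s "$URL"   # echoserver responds with the request details, as in the body above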

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (39.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c46927a2-67d8-4562-a268-3a71a9c364c8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005517691s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-005894 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-005894 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-005894 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-005894 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [abcda6be-edd0-47dd-b650-641595c96485] Pending
helpers_test.go:344: "sp-pod" [abcda6be-edd0-47dd-b650-641595c96485] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [abcda6be-edd0-47dd-b650-641595c96485] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004073324s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-005894 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-005894 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-005894 delete -f testdata/storage-provisioner/pod.yaml: (2.758562466s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-005894 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3932b2dd-8d06-4b5d-9e73-095ba4c6d64f] Pending
helpers_test.go:344: "sp-pod" [3932b2dd-8d06-4b5d-9e73-095ba4c6d64f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3932b2dd-8d06-4b5d-9e73-095ba4c6d64f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004977856s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-005894 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.68s)
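The flow above (bind a PVC, write a file through the pod's mount, delete and recreate the pod, then confirm the file survived) can be replayed by hand against the same profile. A minimal sketch, assuming the same testdata manifests and the pod/mount names shown in the log:
  $ kubectl --context functional-005894 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-005894 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-005894 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-005894 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-005894 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-005894 exec sp-pod -- ls /tmp/mount   # foo should persist across the pod recreation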

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.39s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh -n functional-005894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cp functional-005894:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3081888471/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh -n functional-005894 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh -n functional-005894 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.39s)
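For reference, the three cp invocations above cover both directions: host to node, node to host, and host to a node path that does not exist yet. A hedged sketch of the same round trip (file names are arbitrary; the profile and node name are taken from the log):
  $ minikube -p functional-005894 cp ./local.txt /home/docker/copied.txt
  $ minikube -p functional-005894 cp functional-005894:/home/docker/copied.txt ./roundtrip.txt
  $ minikube -p functional-005894 ssh -n functional-005894 "sudo cat /home/docker/copied.txt"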

TestFunctional/parallel/MySQL (28.78s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-005894 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-lcmld" [48333415-f3ca-4489-892e-a86bf7ed8474] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-lcmld" [48333415-f3ca-4489-892e-a86bf7ed8474] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.003864483s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-005894 exec mysql-64454c8b5c-lcmld -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-005894 exec mysql-64454c8b5c-lcmld -- mysql -ppassword -e "show databases;": exit status 1 (182.856335ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-005894 exec mysql-64454c8b5c-lcmld -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-005894 exec mysql-64454c8b5c-lcmld -- mysql -ppassword -e "show databases;": exit status 1 (152.759911ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-005894 exec mysql-64454c8b5c-lcmld -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.78s)
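The two ERROR 2002 attempts are expected: the pod reports Running before mysqld has finished initializing, so the test keeps re-running the query until the server socket accepts connections. A rough shell equivalent of that retry (pod name and password from the log; the 5-second interval is an arbitrary choice):
  $ until kubectl --context functional-005894 exec mysql-64454c8b5c-lcmld -- mysql -ppassword -e "show databases;"; do sleep 5; done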

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18884/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo cat /etc/test/nested/copy/18884/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.43s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18884.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo cat /etc/ssl/certs/18884.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18884.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo cat /usr/share/ca-certificates/18884.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/188842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo cat /etc/ssl/certs/188842.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/188842.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo cat /usr/share/ca-certificates/188842.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.43s)
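The .0 names checked alongside the .pem files (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases of those certificates in /etc/ssl/certs. Assuming openssl is available inside the guest, the hash behind such an alias can be recomputed for comparison; the command below should print the hash used for the first certificate's alias:
  $ minikube -p functional-005894 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/18884.pem"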

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-005894 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
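The go-template above only prints the label keys of the first node; when the exact template is not needed, the built-in flag gives the same information with less quoting:
  $ kubectl --context functional-005894 get nodes --show-labels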

TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 ssh "sudo systemctl is-active docker": exit status 1 (233.759604ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 ssh "sudo systemctl is-active containerd": exit status 1 (230.584405ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
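The non-zero exits are the point of this test: on a cri-o node, systemctl is-active should report docker and containerd as inactive, and the ssh wrapper surfaces that failing status in stderr. To verify the active runtime by hand, assuming the unit is named crio as in standard minikube guest images:
  $ minikube -p functional-005894 ssh "sudo systemctl is-active crio"      # expected: active, exit 0
  $ minikube -p functional-005894 ssh "sudo systemctl is-active docker"    # expected: inactive, non-zero exit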

TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-005894 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-005894
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-005894
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-005894 image ls --format short --alsologtostderr:
I0422 17:11:56.768017   28914 out.go:291] Setting OutFile to fd 1 ...
I0422 17:11:56.768637   28914 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:56.768691   28914 out.go:304] Setting ErrFile to fd 2...
I0422 17:11:56.768709   28914 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:56.769195   28914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
I0422 17:11:56.770209   28914 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:56.770313   28914 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:56.770653   28914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:56.770698   28914 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:56.785767   28914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
I0422 17:11:56.786226   28914 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:56.786840   28914 main.go:141] libmachine: Using API Version  1
I0422 17:11:56.786876   28914 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:56.787301   28914 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:56.787561   28914 main.go:141] libmachine: (functional-005894) Calling .GetState
I0422 17:11:56.789624   28914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:56.789673   28914 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:56.804708   28914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
I0422 17:11:56.805158   28914 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:56.805632   28914 main.go:141] libmachine: Using API Version  1
I0422 17:11:56.805659   28914 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:56.806034   28914 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:56.806235   28914 main.go:141] libmachine: (functional-005894) Calling .DriverName
I0422 17:11:56.806462   28914 ssh_runner.go:195] Run: systemctl --version
I0422 17:11:56.806486   28914 main.go:141] libmachine: (functional-005894) Calling .GetSSHHostname
I0422 17:11:56.809707   28914 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:56.810208   28914 main.go:141] libmachine: (functional-005894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:d5:8a", ip: ""} in network mk-functional-005894: {Iface:virbr1 ExpiryTime:2024-04-22 18:09:08 +0000 UTC Type:0 Mac:52:54:00:89:d5:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-005894 Clientid:01:52:54:00:89:d5:8a}
I0422 17:11:56.810237   28914 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined IP address 192.168.39.154 and MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:56.810490   28914 main.go:141] libmachine: (functional-005894) Calling .GetSSHPort
I0422 17:11:56.810675   28914 main.go:141] libmachine: (functional-005894) Calling .GetSSHKeyPath
I0422 17:11:56.810844   28914 main.go:141] libmachine: (functional-005894) Calling .GetSSHUsername
I0422 17:11:56.810995   28914 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/functional-005894/id_rsa Username:docker}
I0422 17:11:56.891390   28914 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 17:11:56.936929   28914 main.go:141] libmachine: Making call to close driver server
I0422 17:11:56.936946   28914 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:56.937200   28914 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
I0422 17:11:56.937210   28914 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:56.937223   28914 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:11:56.937231   28914 main.go:141] libmachine: Making call to close driver server
I0422 17:11:56.937240   28914 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:56.937502   28914 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:56.937517   28914 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
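As the --alsologtostderr trace shows, the image listing is ultimately read from the guest via sudo crictl images --output json. The same data can be inspected directly; the jq filter below is only an illustrative assumption, not something the test runs:
  $ minikube -p functional-005894 ssh "sudo crictl images --output json" | jq -r '.images[] | .repoTags[]?'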

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-005894 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-controller-manager | v1.30.0            | c7aad43836fa5 | 112MB  |
| registry.k8s.io/kube-scheduler          | v1.30.0            | 259c8277fcbbc | 63MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-005894  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-proxy              | v1.30.0            | a0bf559e280cf | 85.9MB |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| docker.io/library/nginx                 | latest             | 2ac752d7aeb1d | 192MB  |
| registry.k8s.io/kube-apiserver          | v1.30.0            | c42f13656d0b2 | 118MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/minikube-local-cache-test     | functional-005894  | b0383e876aef4 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-005894 image ls --format table --alsologtostderr:
I0422 17:11:58.451076   29065 out.go:291] Setting OutFile to fd 1 ...
I0422 17:11:58.451424   29065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:58.451436   29065 out.go:304] Setting ErrFile to fd 2...
I0422 17:11:58.451441   29065 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:58.451683   29065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
I0422 17:11:58.452273   29065 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:58.452424   29065 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:58.452900   29065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:58.452939   29065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:58.467495   29065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
I0422 17:11:58.467981   29065 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:58.468585   29065 main.go:141] libmachine: Using API Version  1
I0422 17:11:58.468608   29065 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:58.469023   29065 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:58.469243   29065 main.go:141] libmachine: (functional-005894) Calling .GetState
I0422 17:11:58.471243   29065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:58.471299   29065 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:58.485922   29065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37533
I0422 17:11:58.486373   29065 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:58.486906   29065 main.go:141] libmachine: Using API Version  1
I0422 17:11:58.486930   29065 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:58.487222   29065 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:58.487408   29065 main.go:141] libmachine: (functional-005894) Calling .DriverName
I0422 17:11:58.487611   29065 ssh_runner.go:195] Run: systemctl --version
I0422 17:11:58.487642   29065 main.go:141] libmachine: (functional-005894) Calling .GetSSHHostname
I0422 17:11:58.490505   29065 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:58.490985   29065 main.go:141] libmachine: (functional-005894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:d5:8a", ip: ""} in network mk-functional-005894: {Iface:virbr1 ExpiryTime:2024-04-22 18:09:08 +0000 UTC Type:0 Mac:52:54:00:89:d5:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-005894 Clientid:01:52:54:00:89:d5:8a}
I0422 17:11:58.491023   29065 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined IP address 192.168.39.154 and MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:58.491113   29065 main.go:141] libmachine: (functional-005894) Calling .GetSSHPort
I0422 17:11:58.491303   29065 main.go:141] libmachine: (functional-005894) Calling .GetSSHKeyPath
I0422 17:11:58.491431   29065 main.go:141] libmachine: (functional-005894) Calling .GetSSHUsername
I0422 17:11:58.491587   29065 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/functional-005894/id_rsa Username:docker}
I0422 17:11:58.587739   29065 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 17:11:58.714610   29065 main.go:141] libmachine: Making call to close driver server
I0422 17:11:58.714642   29065 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:58.714937   29065 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:58.714956   29065 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:11:58.714965   29065 main.go:141] libmachine: Making call to close driver server
I0422 17:11:58.714969   29065 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
I0422 17:11:58.714974   29065 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:58.715250   29065 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:58.715264   29065 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-005894 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117609952"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe","registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":
"112170310"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-005894"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],
"size":"150779692"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67","registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"63026502"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"5107333e08a
87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"b0383e876aef4aad24c4af5fb2c319bf102b260d0ff00e09a3130750a7dd2bd9","repoDigests":["localhost/minikube-local-cache-test@sha256:e54d2e50c22fab30511751262bebbdcd88ed50015cec48858cb3a9dc69337184"],"repoTags":["localhost/minikube-local-cache-test:functional-005894"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"e6f1816883972d4be47bd48879a08
919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580","repoDigests":["docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419","docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5"],"repoTags":["docker.io/library/nginx:latest"],"size":"191703878"},{"id":"56cc512116c8f89
4f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":["registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"85932953"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","re
poDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-005894 image ls --format json --alsologtostderr:
I0422 17:11:58.137764   29042 out.go:291] Setting OutFile to fd 1 ...
I0422 17:11:58.137917   29042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:58.137934   29042 out.go:304] Setting ErrFile to fd 2...
I0422 17:11:58.137940   29042 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:58.138264   29042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
I0422 17:11:58.139064   29042 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:58.139238   29042 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:58.139811   29042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:58.139875   29042 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:58.154659   29042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35385
I0422 17:11:58.155144   29042 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:58.155835   29042 main.go:141] libmachine: Using API Version  1
I0422 17:11:58.155884   29042 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:58.156283   29042 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:58.156519   29042 main.go:141] libmachine: (functional-005894) Calling .GetState
I0422 17:11:58.158431   29042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:58.158468   29042 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:58.173045   29042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33941
I0422 17:11:58.173558   29042 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:58.174067   29042 main.go:141] libmachine: Using API Version  1
I0422 17:11:58.174100   29042 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:58.174452   29042 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:58.174619   29042 main.go:141] libmachine: (functional-005894) Calling .DriverName
I0422 17:11:58.174817   29042 ssh_runner.go:195] Run: systemctl --version
I0422 17:11:58.174840   29042 main.go:141] libmachine: (functional-005894) Calling .GetSSHHostname
I0422 17:11:58.177903   29042 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:58.178355   29042 main.go:141] libmachine: (functional-005894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:d5:8a", ip: ""} in network mk-functional-005894: {Iface:virbr1 ExpiryTime:2024-04-22 18:09:08 +0000 UTC Type:0 Mac:52:54:00:89:d5:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-005894 Clientid:01:52:54:00:89:d5:8a}
I0422 17:11:58.178386   29042 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined IP address 192.168.39.154 and MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:58.178573   29042 main.go:141] libmachine: (functional-005894) Calling .GetSSHPort
I0422 17:11:58.178740   29042 main.go:141] libmachine: (functional-005894) Calling .GetSSHKeyPath
I0422 17:11:58.178898   29042 main.go:141] libmachine: (functional-005894) Calling .GetSSHUsername
I0422 17:11:58.179047   29042 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/functional-005894/id_rsa Username:docker}
I0422 17:11:58.306060   29042 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 17:11:58.392246   29042 main.go:141] libmachine: Making call to close driver server
I0422 17:11:58.392262   29042 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:58.392535   29042 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:58.392550   29042 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:11:58.392563   29042 main.go:141] libmachine: Making call to close driver server
I0422 17:11:58.392569   29042 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
I0422 17:11:58.392572   29042 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:58.392868   29042 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:58.392891   29042 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:11:58.392909   29042 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-005894 image ls --format yaml --alsologtostderr:
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: b0383e876aef4aad24c4af5fb2c319bf102b260d0ff00e09a3130750a7dd2bd9
repoDigests:
- localhost/minikube-local-cache-test@sha256:e54d2e50c22fab30511751262bebbdcd88ed50015cec48858cb3a9dc69337184
repoTags:
- localhost/minikube-local-cache-test:functional-005894
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67
- registry.k8s.io/kube-scheduler@sha256:d2c2a1d9de7a42d91bfedba5ed4f58126f9cff702d35419d78ce4e7cb07f3b7a
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "63026502"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe
- registry.k8s.io/kube-controller-manager@sha256:b7622a0826b7690a307eea994e2abc918f35a27a08e30c37b58c9e3f8336a450
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "112170310"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests:
- registry.k8s.io/kube-proxy@sha256:880f26b53295d384d2f1fed06aa4d58567e3038157f70a1151a7dd8ef8afaa68
- registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "85932953"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:31282cf15b67192cd35f847715a9571f5dd4ac0e130290a408a866bd040bcd81
- registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117609952"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 2ac752d7aeb1d9281f708e7c51501c41baf90de15ffc9bca7c5d38b8da41b580
repoDigests:
- docker.io/library/nginx@sha256:0463a96ac74b84a8a1b27f3d1f4ae5d1a70ea823219394e131f5bf3536674419
- docker.io/library/nginx@sha256:b5873c5e785c0ae70b4f999d6719a27441126667088c2edd1eaf3060e4868ec5
repoTags:
- docker.io/library/nginx:latest
size: "191703878"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-005894
size: "34114467"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-005894 image ls --format yaml --alsologtostderr:
I0422 17:11:56.992790   28938 out.go:291] Setting OutFile to fd 1 ...
I0422 17:11:56.992899   28938 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:56.992913   28938 out.go:304] Setting ErrFile to fd 2...
I0422 17:11:56.992919   28938 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:56.993116   28938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
I0422 17:11:56.993694   28938 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:56.993783   28938 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:56.994119   28938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:56.994159   28938 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:57.008800   28938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45991
I0422 17:11:57.009219   28938 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:57.009755   28938 main.go:141] libmachine: Using API Version  1
I0422 17:11:57.009778   28938 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:57.010148   28938 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:57.010402   28938 main.go:141] libmachine: (functional-005894) Calling .GetState
I0422 17:11:57.012317   28938 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:57.012360   28938 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:57.027659   28938 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33249
I0422 17:11:57.028082   28938 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:57.028609   28938 main.go:141] libmachine: Using API Version  1
I0422 17:11:57.028635   28938 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:57.028944   28938 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:57.029119   28938 main.go:141] libmachine: (functional-005894) Calling .DriverName
I0422 17:11:57.029324   28938 ssh_runner.go:195] Run: systemctl --version
I0422 17:11:57.029348   28938 main.go:141] libmachine: (functional-005894) Calling .GetSSHHostname
I0422 17:11:57.032461   28938 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:57.032888   28938 main.go:141] libmachine: (functional-005894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:d5:8a", ip: ""} in network mk-functional-005894: {Iface:virbr1 ExpiryTime:2024-04-22 18:09:08 +0000 UTC Type:0 Mac:52:54:00:89:d5:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-005894 Clientid:01:52:54:00:89:d5:8a}
I0422 17:11:57.032920   28938 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined IP address 192.168.39.154 and MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:57.033108   28938 main.go:141] libmachine: (functional-005894) Calling .GetSSHPort
I0422 17:11:57.033293   28938 main.go:141] libmachine: (functional-005894) Calling .GetSSHKeyPath
I0422 17:11:57.033440   28938 main.go:141] libmachine: (functional-005894) Calling .GetSSHUsername
I0422 17:11:57.033584   28938 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/functional-005894/id_rsa Username:docker}
I0422 17:11:57.114540   28938 ssh_runner.go:195] Run: sudo crictl images --output json
I0422 17:11:57.161188   28938 main.go:141] libmachine: Making call to close driver server
I0422 17:11:57.161204   28938 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:57.161469   28938 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:57.161484   28938 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:11:57.161500   28938 main.go:141] libmachine: Making call to close driver server
I0422 17:11:57.161508   28938 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:11:57.161753   28938 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:11:57.161769   28938 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
I0422 17:11:57.161782   28938 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 ssh pgrep buildkitd: exit status 1 (199.279124ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image build -t localhost/my-image:functional-005894 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image build -t localhost/my-image:functional-005894 testdata/build --alsologtostderr: (5.706003961s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-005894 image build -t localhost/my-image:functional-005894 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0c0fa8a543c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-005894
--> b431e61fcd2
Successfully tagged localhost/my-image:functional-005894
b431e61fcd2b6a098d0d72715e2a176916cd886070ddffe6dd822268af51dae0
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-005894 image build -t localhost/my-image:functional-005894 testdata/build --alsologtostderr:
I0422 17:11:57.422658   28992 out.go:291] Setting OutFile to fd 1 ...
I0422 17:11:57.422987   28992 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:57.423005   28992 out.go:304] Setting ErrFile to fd 2...
I0422 17:11:57.423012   28992 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:11:57.423322   28992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
I0422 17:11:57.423973   28992 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:57.424602   28992 config.go:182] Loaded profile config "functional-005894": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
I0422 17:11:57.424976   28992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:57.425015   28992 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:57.439798   28992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
I0422 17:11:57.440349   28992 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:57.440877   28992 main.go:141] libmachine: Using API Version  1
I0422 17:11:57.440897   28992 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:57.441240   28992 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:57.441423   28992 main.go:141] libmachine: (functional-005894) Calling .GetState
I0422 17:11:57.443302   28992 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0422 17:11:57.443344   28992 main.go:141] libmachine: Launching plugin server for driver kvm2
I0422 17:11:57.458685   28992 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43351
I0422 17:11:57.459098   28992 main.go:141] libmachine: () Calling .GetVersion
I0422 17:11:57.459602   28992 main.go:141] libmachine: Using API Version  1
I0422 17:11:57.459631   28992 main.go:141] libmachine: () Calling .SetConfigRaw
I0422 17:11:57.459975   28992 main.go:141] libmachine: () Calling .GetMachineName
I0422 17:11:57.460166   28992 main.go:141] libmachine: (functional-005894) Calling .DriverName
I0422 17:11:57.460371   28992 ssh_runner.go:195] Run: systemctl --version
I0422 17:11:57.460399   28992 main.go:141] libmachine: (functional-005894) Calling .GetSSHHostname
I0422 17:11:57.463540   28992 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:57.464015   28992 main.go:141] libmachine: (functional-005894) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:d5:8a", ip: ""} in network mk-functional-005894: {Iface:virbr1 ExpiryTime:2024-04-22 18:09:08 +0000 UTC Type:0 Mac:52:54:00:89:d5:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-005894 Clientid:01:52:54:00:89:d5:8a}
I0422 17:11:57.464037   28992 main.go:141] libmachine: (functional-005894) DBG | domain functional-005894 has defined IP address 192.168.39.154 and MAC address 52:54:00:89:d5:8a in network mk-functional-005894
I0422 17:11:57.464173   28992 main.go:141] libmachine: (functional-005894) Calling .GetSSHPort
I0422 17:11:57.464391   28992 main.go:141] libmachine: (functional-005894) Calling .GetSSHKeyPath
I0422 17:11:57.464551   28992 main.go:141] libmachine: (functional-005894) Calling .GetSSHUsername
I0422 17:11:57.464724   28992 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/functional-005894/id_rsa Username:docker}
I0422 17:11:57.550395   28992 build_images.go:161] Building image from path: /tmp/build.3488504288.tar
I0422 17:11:57.550475   28992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0422 17:11:57.562312   28992 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3488504288.tar
I0422 17:11:57.567084   28992 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3488504288.tar: stat -c "%s %y" /var/lib/minikube/build/build.3488504288.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3488504288.tar': No such file or directory
I0422 17:11:57.567140   28992 ssh_runner.go:362] scp /tmp/build.3488504288.tar --> /var/lib/minikube/build/build.3488504288.tar (3072 bytes)
I0422 17:11:57.595838   28992 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3488504288
I0422 17:11:57.615913   28992 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3488504288 -xf /var/lib/minikube/build/build.3488504288.tar
I0422 17:11:57.626251   28992 crio.go:315] Building image: /var/lib/minikube/build/build.3488504288
I0422 17:11:57.626316   28992 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-005894 /var/lib/minikube/build/build.3488504288 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0422 17:12:03.022567   28992 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-005894 /var/lib/minikube/build/build.3488504288 --cgroup-manager=cgroupfs: (5.396228538s)
I0422 17:12:03.022635   28992 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3488504288
I0422 17:12:03.044069   28992 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3488504288.tar
I0422 17:12:03.067216   28992 build_images.go:217] Built localhost/my-image:functional-005894 from /tmp/build.3488504288.tar
I0422 17:12:03.067253   28992 build_images.go:133] succeeded building to: functional-005894
I0422 17:12:03.067260   28992 build_images.go:134] failed building to: 
I0422 17:12:03.067280   28992 main.go:141] libmachine: Making call to close driver server
I0422 17:12:03.067297   28992 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:12:03.067640   28992 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:12:03.067660   28992 main.go:141] libmachine: Making call to close connection to plugin binary
I0422 17:12:03.067668   28992 main.go:141] libmachine: Making call to close driver server
I0422 17:12:03.067676   28992 main.go:141] libmachine: (functional-005894) Calling .Close
I0422 17:12:03.067640   28992 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
I0422 17:12:03.067951   28992 main.go:141] libmachine: (functional-005894) DBG | Closing plugin on server side
I0422 17:12:03.068005   28992 main.go:141] libmachine: Successfully made call to close driver server
I0422 17:12:03.068022   28992 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls
E0422 17:12:51.748750   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:15:07.901750   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:15:35.589854   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.12s)
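
For reference, the build exercised above can be repeated by hand against the same profile; a minimal sketch, assuming testdata/build contains the three-step Dockerfile shown in the output:

  # build an image directly into the cluster's container storage
  out/minikube-linux-amd64 -p functional-005894 image build -t localhost/my-image:functional-005894 testdata/build --alsologtostderr
  # confirm the new tag is visible to the runtime
  out/minikube-linux-amd64 -p functional-005894 image ls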

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.920457288s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-005894
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-005894 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-005894 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-nk8tv" [b3f1a142-5e35-4a05-b4aa-7646655fe789] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-nk8tv" [b3f1a142-5e35-4a05-b4aa-7646655fe789] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004662539s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.21s)
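
The deployment and NodePort service used by the ServiceCmd tests can be recreated with plain kubectl; a minimal sketch, where the readiness wait is an illustrative addition rather than part of the test:

  kubectl --context functional-005894 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-005894 expose deployment hello-node --type=NodePort --port=8080
  # wait until the echoserver pod reports Ready
  kubectl --context functional-005894 wait --for=condition=ready pod -l app=hello-node --timeout=10m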

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image load --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image load --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr: (4.257742715s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.50s)
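
The load path follows on from the Setup step above: pull on the host, retag for the profile, copy into the cluster's image store, then list; a minimal sketch of the same sequence:

  docker pull gcr.io/google-containers/addon-resizer:1.8.8
  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-005894
  # copy the image from the host docker daemon into the cluster runtime
  out/minikube-linux-amd64 -p functional-005894 image load --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr
  out/minikube-linux-amd64 -p functional-005894 image ls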

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image load --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image load --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr: (2.471218458s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.927433336s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-005894
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image load --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr
E0422 17:11:29.827017   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image load --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr: (5.058962508s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 service list: (1.048004813s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.05s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 service list -o json
functional_test.go:1490: Took "542.645938ms" to run "out/minikube-linux-amd64 -p functional-005894 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.154:31907
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.154:31907
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
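
The HTTPS, Format and URL checks only differ in how the same NodePort endpoint is printed; a minimal sketch of resolving and probing it, where the final curl is an illustrative addition and not part of the test:

  out/minikube-linux-amd64 -p functional-005894 service hello-node --url
  out/minikube-linux-amd64 -p functional-005894 service --namespace=default --https --url hello-node
  out/minikube-linux-amd64 -p functional-005894 service hello-node --url --format={{.IP}}
  # probe the endpoint the first command printed
  curl -s "$(out/minikube-linux-amd64 -p functional-005894 service hello-node --url)"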

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "375.340845ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "69.924601ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "308.130631ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "170.466611ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
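
The profile listings above are the machine-readable counterparts of a plain profile list; a minimal sketch, where the jq filter and the .valid[].Name field path are assumptions for illustration and are not used by the test:

  out/minikube-linux-amd64 profile list
  out/minikube-linux-amd64 profile list -o json
  # --light skips probing the live status of each profile, so it returns faster
  out/minikube-linux-amd64 profile list -o json --light
  out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'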

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (19.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdany-port188594114/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713805894026980339" to /tmp/TestFunctionalparallelMountCmdany-port188594114/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713805894026980339" to /tmp/TestFunctionalparallelMountCmdany-port188594114/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713805894026980339" to /tmp/TestFunctionalparallelMountCmdany-port188594114/001/test-1713805894026980339
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.328354ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 22 17:11 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 22 17:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 22 17:11 test-1713805894026980339
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh cat /mount-9p/test-1713805894026980339
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-005894 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5ae111ac-f8b1-4792-8c80-2c8ccb77c33a] Pending
helpers_test.go:344: "busybox-mount" [5ae111ac-f8b1-4792-8c80-2c8ccb77c33a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5ae111ac-f8b1-4792-8c80-2c8ccb77c33a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5ae111ac-f8b1-4792-8c80-2c8ccb77c33a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.005242202s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-005894 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdany-port188594114/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.52s)
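
The any-port flow can be driven manually: start the 9p mount in the background, verify it from inside the guest, then unmount; a minimal sketch using the same flags, with /tmp/mount-src standing in for the temporary host directory the test creates:

  out/minikube-linux-amd64 mount -p functional-005894 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-005894 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-005894 ssh "sudo umount -f /mount-9p"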

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image save gcr.io/google-containers/addon-resizer:functional-005894 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image save gcr.io/google-containers/addon-resizer:functional-005894 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.647193968s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image rm gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image rm gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr: (1.532446714s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.90s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (8.292588907s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-005894
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 image save --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-005894 image save --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr: (1.482916777s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-005894
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.52s)
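
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon form a save/remove/restore round trip; a minimal sketch of the same cycle using the tarball path from the test:

  out/minikube-linux-amd64 -p functional-005894 image save gcr.io/google-containers/addon-resizer:functional-005894 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
  out/minikube-linux-amd64 -p functional-005894 image rm gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr
  out/minikube-linux-amd64 -p functional-005894 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
  # push the in-cluster image back into the host docker daemon
  out/minikube-linux-amd64 -p functional-005894 image save --daemon gcr.io/google-containers/addon-resizer:functional-005894 --alsologtostderr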

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdspecific-port1009883609/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (212.278591ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdspecific-port1009883609/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-005894 ssh "sudo umount -f /mount-9p": exit status 1 (262.156078ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-005894 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdspecific-port1009883609/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-005894 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-005894 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-005894 /tmp/TestFunctionalparallelMountCmdVerifyCleanup542442806/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.78s)

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-005894
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-005894
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-005894
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (200.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-025067 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0422 17:20:07.902472   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-025067 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m20.187291927s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.89s)
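
The HA start reduces to a single start invocation plus a status check; a minimal sketch with the same flags the test passes:

  # --ha provisions additional control-plane nodes behind a shared endpoint
  out/minikube-linux-amd64 start -p ha-025067 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr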

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-025067 -- rollout status deployment/busybox: (3.754823659s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-l97ld -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-m6qxt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-tvcmk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-l97ld -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-m6qxt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-tvcmk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-l97ld -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-m6qxt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-tvcmk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.14s)
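
Each busybox replica runs the same three lookups; a minimal sketch of the checks for one replica, where capturing the pod name into POD is an illustrative addition:

  out/minikube-linux-amd64 kubectl -p ha-025067 -- rollout status deployment/busybox
  POD=$(out/minikube-linux-amd64 kubectl -p ha-025067 -- get pods -o jsonpath='{.items[0].metadata.name}')
  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec "$POD" -- nslookup kubernetes.io
  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local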

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-l97ld -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-l97ld -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-m6qxt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-m6qxt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-tvcmk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-025067 -- exec busybox-fc5497c4f-tvcmk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (48.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-025067 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-025067 -v=7 --alsologtostderr: (47.614594154s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.49s)
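
Nodes are added to the running profile with node add; a minimal sketch showing the worker form used here and, for contrast, the control-plane form used later by AddSecondaryNode:

  # add a worker node
  out/minikube-linux-amd64 node add -p ha-025067 -v=7 --alsologtostderr
  # or add another control-plane node
  out/minikube-linux-amd64 node add -p ha-025067 --control-plane -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr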

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-025067 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp testdata/cp-test.txt ha-025067:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067:/home/docker/cp-test.txt ha-025067-m02:/home/docker/cp-test_ha-025067_ha-025067-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test_ha-025067_ha-025067-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067:/home/docker/cp-test.txt ha-025067-m03:/home/docker/cp-test_ha-025067_ha-025067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test_ha-025067_ha-025067-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067:/home/docker/cp-test.txt ha-025067-m04:/home/docker/cp-test_ha-025067_ha-025067-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test_ha-025067_ha-025067-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp testdata/cp-test.txt ha-025067-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m02:/home/docker/cp-test.txt ha-025067:/home/docker/cp-test_ha-025067-m02_ha-025067.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test_ha-025067-m02_ha-025067.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m02:/home/docker/cp-test.txt ha-025067-m03:/home/docker/cp-test_ha-025067-m02_ha-025067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test_ha-025067-m02_ha-025067-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m02:/home/docker/cp-test.txt ha-025067-m04:/home/docker/cp-test_ha-025067-m02_ha-025067-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test_ha-025067-m02_ha-025067-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp testdata/cp-test.txt ha-025067-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt ha-025067:/home/docker/cp-test_ha-025067-m03_ha-025067.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test_ha-025067-m03_ha-025067.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt ha-025067-m02:/home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt
E0422 17:21:19.002560   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:19.007865   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:19.018176   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:19.038583   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:19.079022   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:21:19.159417   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test.txt"
E0422 17:21:19.320019   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test_ha-025067-m03_ha-025067-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m03:/home/docker/cp-test.txt ha-025067-m04:/home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt
E0422 17:21:19.640723   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test_ha-025067-m03_ha-025067-m04.txt"
E0422 17:21:20.281905   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp testdata/cp-test.txt ha-025067-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile788881982/001/cp-test_ha-025067-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt ha-025067:/home/docker/cp-test_ha-025067-m04_ha-025067.txt
E0422 17:21:21.562547   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067 "sudo cat /home/docker/cp-test_ha-025067-m04_ha-025067.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt ha-025067-m02:/home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test_ha-025067-m04_ha-025067-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 cp ha-025067-m04:/home/docker/cp-test.txt ha-025067-m03:/home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m03 "sudo cat /home/docker/cp-test_ha-025067-m04_ha-025067-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.55s)
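
Every CopyFile assertion pairs a cp into a node with an ssh cat back out of it; a minimal sketch of one such pair against the m02 node:

  out/minikube-linux-amd64 -p ha-025067 cp testdata/cp-test.txt ha-025067-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-025067 ssh -n ha-025067-m02 "sudo cat /home/docker/cp-test.txt"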

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.492058355s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-025067 node delete m03 -v=7 --alsologtostderr: (16.678010575s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.44s)
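
Removing a control-plane member and checking what is left combines node delete with kubectl; a minimal sketch:

  out/minikube-linux-amd64 -p ha-025067 node delete m03 -v=7 --alsologtostderr
  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
  kubectl get nodes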

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (291.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-025067 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0422 17:35:07.901963   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:36:19.003149   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 17:37:42.051290   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-025067 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m50.564750322s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (291.32s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (76.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-025067 --control-plane -v=7 --alsologtostderr
E0422 17:40:07.902596   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-025067 --control-plane -v=7 --alsologtostderr: (1m15.313085771s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-025067 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.16s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
x
+
TestJSONOutput/start/Command (60.66s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-377935 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0422 17:41:19.003075   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-377935 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m0.659473656s)
--- PASS: TestJSONOutput/start/Command (60.66s)
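
The JSONOutput group reruns the basic lifecycle with --output=json so every step is emitted as a structured event; a minimal sketch of the start, pause and unpause commands covered in this group:

  out/minikube-linux-amd64 start -p json-output-377935 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 pause -p json-output-377935 --output=json --user=testUser
  out/minikube-linux-amd64 unpause -p json-output-377935 --output=json --user=testUser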

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-377935 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-377935 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.71s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.39s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-377935 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-377935 --output=json --user=testUser: (7.393657329s)
--- PASS: TestJSONOutput/stop/Command (7.39s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-215162 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-215162 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (80.736269ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18ab41eb-6169-4ea0-b185-061a014b95c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-215162] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec317d4e-3a46-443f-b523-5da8f5eba66f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18706"}}
	{"specversion":"1.0","id":"7a00144a-e8a9-420d-8a4f-7ff96d02f3e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a5aca77a-5697-4d4d-a67a-f10eba5f514f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig"}}
	{"specversion":"1.0","id":"2e292104-e284-4912-828f-17372e600099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube"}}
	{"specversion":"1.0","id":"d61cf921-c81e-43c9-8b10-5972cf90ec25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"042925a3-98ef-4a66-a2de-b1fdf89bd2f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bb45466d-0705-417c-8bc2-b53ccd470675","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-215162" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-215162
--- PASS: TestErrorJSONOutput (0.21s)
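Note: the lines in the stdout block above are the CloudEvents-style records that `minikube ... --output=json` prints, one JSON object per line (the same stream exercised by the TestJSONOutput cases earlier). The following is a minimal, hypothetical Go sketch for consuming such a stream; the struct fields simply mirror the keys visible in the log (specversion, id, source, type, datacontenttype, data) and are not taken from minikube's own source.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the keys visible in the JSON lines above; it is an
// illustrative struct, not minikube's own type.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Example: out/minikube-linux-amd64 start --output=json ... | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		// Error events (type ...minikube.error) carry exitcode/name/message in data,
		// as in the DRV_UNSUPPORTED_OS record above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}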

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (89.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-860099 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-860099 --driver=kvm2  --container-runtime=crio: (42.988989502s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-863530 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-863530 --driver=kvm2  --container-runtime=crio: (43.917033555s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-860099
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-863530
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-863530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-863530
helpers_test.go:175: Cleaning up "first-860099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-860099
E0422 17:43:10.950861   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
--- PASS: TestMinikubeProfile (89.59s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-165001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-165001 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.025030869s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-165001 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-165001 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (28.16s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-179025 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-179025 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.158460505s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.16s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-179025 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-179025 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-165001 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-179025 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-179025 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.31s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-179025
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-179025: (1.30778236s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (25.18s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-179025
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-179025: (24.181051368s)
--- PASS: TestMountStart/serial/RestartStopped (25.18s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-179025 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-179025 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)
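The mount checks above repeat the same two probes: `ssh -- ls /minikube-host` and `ssh -- mount | grep 9p`. A small, hypothetical Go sketch of the 9p check follows; the binary name and profile are placeholders (the CI run invokes out/minikube-linux-amd64 with the mount-start profiles shown above).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// has9pMount shells out to `minikube ssh -- mount` for the given profile and
// reports whether a 9p filesystem shows up, mirroring the grep done above.
func has9pMount(minikubeBin, profile string) (bool, error) {
	out, err := exec.Command(minikubeBin, "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("minikube ssh: %v: %s", err, out)
	}
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := has9pMount("minikube", "mount-start-2-179025") // placeholder binary/profile
	fmt.Println(ok, err)
}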

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (134.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-704531 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0422 17:45:07.901881   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
E0422 17:46:19.003265   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-704531 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m13.722160812s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.14s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-704531 -- rollout status deployment/busybox: (3.435094946s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-bl7n4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-tbmbs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-bl7n4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-tbmbs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-bl7n4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-tbmbs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.01s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-bl7n4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-bl7n4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-tbmbs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-704531 -- exec busybox-fc5497c4f-tbmbs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (41.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-704531 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-704531 -v 3 --alsologtostderr: (41.089966936s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.66s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-704531 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp testdata/cp-test.txt multinode-704531:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile478955910/001/cp-test_multinode-704531.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531:/home/docker/cp-test.txt multinode-704531-m02:/home/docker/cp-test_multinode-704531_multinode-704531-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m02 "sudo cat /home/docker/cp-test_multinode-704531_multinode-704531-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531:/home/docker/cp-test.txt multinode-704531-m03:/home/docker/cp-test_multinode-704531_multinode-704531-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m03 "sudo cat /home/docker/cp-test_multinode-704531_multinode-704531-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp testdata/cp-test.txt multinode-704531-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile478955910/001/cp-test_multinode-704531-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt multinode-704531:/home/docker/cp-test_multinode-704531-m02_multinode-704531.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531 "sudo cat /home/docker/cp-test_multinode-704531-m02_multinode-704531.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531-m02:/home/docker/cp-test.txt multinode-704531-m03:/home/docker/cp-test_multinode-704531-m02_multinode-704531-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m03 "sudo cat /home/docker/cp-test_multinode-704531-m02_multinode-704531-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp testdata/cp-test.txt multinode-704531-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile478955910/001/cp-test_multinode-704531-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt multinode-704531:/home/docker/cp-test_multinode-704531-m03_multinode-704531.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531 "sudo cat /home/docker/cp-test_multinode-704531-m03_multinode-704531.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 cp multinode-704531-m03:/home/docker/cp-test.txt multinode-704531-m02:/home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 ssh -n multinode-704531-m02 "sudo cat /home/docker/cp-test_multinode-704531-m03_multinode-704531-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)
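Every CopyFile step above follows one pattern: `minikube cp <src> <node>:<dst>` followed by `minikube ssh -n <node> "sudo cat <dst>"` to confirm the contents arrived. A hedged Go sketch of that copy-and-verify round trip; binary, profile, node, and file names are placeholders taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// copyAndCat copies a local file onto a node with `minikube cp` and reads it
// back over `minikube ssh`, mirroring the helper steps logged above.
func copyAndCat(bin, profile, node, local, remote string) (string, error) {
	if out, err := exec.Command(bin, "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
		return "", fmt.Errorf("cp: %v: %s", err, out)
	}
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+remote).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("ssh cat: %v: %s", err, out)
	}
	return string(out), nil
}

func main() {
	txt, err := copyAndCat("minikube", "multinode-704531", "multinode-704531-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt") // placeholder values
	fmt.Println(txt, err)
}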

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-704531 node stop m03: (1.563536493s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-704531 status: exit status 7 (435.426519ms)

                                                
                                                
-- stdout --
	multinode-704531
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-704531-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-704531-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-704531 status --alsologtostderr: exit status 7 (431.747955ms)

                                                
                                                
-- stdout --
	multinode-704531
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-704531-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-704531-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:47:47.948384   47752 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:47:47.948498   47752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:47:47.948507   47752 out.go:304] Setting ErrFile to fd 2...
	I0422 17:47:47.948511   47752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:47:47.948716   47752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 17:47:47.948926   47752 out.go:298] Setting JSON to false
	I0422 17:47:47.948952   47752 mustload.go:65] Loading cluster: multinode-704531
	I0422 17:47:47.948982   47752 notify.go:220] Checking for updates...
	I0422 17:47:47.949398   47752 config.go:182] Loaded profile config "multinode-704531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 17:47:47.949417   47752 status.go:255] checking status of multinode-704531 ...
	I0422 17:47:47.949842   47752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:47:47.949883   47752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:47:47.970880   47752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41649
	I0422 17:47:47.971367   47752 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:47:47.972020   47752 main.go:141] libmachine: Using API Version  1
	I0422 17:47:47.972051   47752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:47:47.972426   47752 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:47:47.972613   47752 main.go:141] libmachine: (multinode-704531) Calling .GetState
	I0422 17:47:47.974246   47752 status.go:330] multinode-704531 host status = "Running" (err=<nil>)
	I0422 17:47:47.974279   47752 host.go:66] Checking if "multinode-704531" exists ...
	I0422 17:47:47.974546   47752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:47:47.974594   47752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:47:47.989052   47752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0422 17:47:47.989491   47752 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:47:47.989914   47752 main.go:141] libmachine: Using API Version  1
	I0422 17:47:47.989932   47752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:47:47.990234   47752 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:47:47.990394   47752 main.go:141] libmachine: (multinode-704531) Calling .GetIP
	I0422 17:47:47.993031   47752 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:47:47.993401   47752 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:47:47.993429   47752 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:47:47.993513   47752 host.go:66] Checking if "multinode-704531" exists ...
	I0422 17:47:47.993829   47752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:47:47.993876   47752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:47:48.009628   47752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0422 17:47:48.010028   47752 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:47:48.010420   47752 main.go:141] libmachine: Using API Version  1
	I0422 17:47:48.010447   47752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:47:48.010729   47752 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:47:48.010916   47752 main.go:141] libmachine: (multinode-704531) Calling .DriverName
	I0422 17:47:48.011077   47752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:47:48.011111   47752 main.go:141] libmachine: (multinode-704531) Calling .GetSSHHostname
	I0422 17:47:48.013790   47752 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:47:48.014144   47752 main.go:141] libmachine: (multinode-704531) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:35:02", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:44:51 +0000 UTC Type:0 Mac:52:54:00:90:35:02 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:multinode-704531 Clientid:01:52:54:00:90:35:02}
	I0422 17:47:48.014176   47752 main.go:141] libmachine: (multinode-704531) DBG | domain multinode-704531 has defined IP address 192.168.39.41 and MAC address 52:54:00:90:35:02 in network mk-multinode-704531
	I0422 17:47:48.014293   47752 main.go:141] libmachine: (multinode-704531) Calling .GetSSHPort
	I0422 17:47:48.014452   47752 main.go:141] libmachine: (multinode-704531) Calling .GetSSHKeyPath
	I0422 17:47:48.014613   47752 main.go:141] libmachine: (multinode-704531) Calling .GetSSHUsername
	I0422 17:47:48.014753   47752 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531/id_rsa Username:docker}
	I0422 17:47:48.094706   47752 ssh_runner.go:195] Run: systemctl --version
	I0422 17:47:48.103431   47752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:47:48.117668   47752 kubeconfig.go:125] found "multinode-704531" server: "https://192.168.39.41:8443"
	I0422 17:47:48.117697   47752 api_server.go:166] Checking apiserver status ...
	I0422 17:47:48.117730   47752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:47:48.131929   47752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1127/cgroup
	W0422 17:47:48.141510   47752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1127/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0422 17:47:48.141553   47752 ssh_runner.go:195] Run: ls
	I0422 17:47:48.146218   47752 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I0422 17:47:48.150324   47752 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I0422 17:47:48.150345   47752 status.go:422] multinode-704531 apiserver status = Running (err=<nil>)
	I0422 17:47:48.150353   47752 status.go:257] multinode-704531 status: &{Name:multinode-704531 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:47:48.150369   47752 status.go:255] checking status of multinode-704531-m02 ...
	I0422 17:47:48.150665   47752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:47:48.150700   47752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:47:48.165749   47752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0422 17:47:48.166234   47752 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:47:48.166752   47752 main.go:141] libmachine: Using API Version  1
	I0422 17:47:48.166773   47752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:47:48.167103   47752 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:47:48.167309   47752 main.go:141] libmachine: (multinode-704531-m02) Calling .GetState
	I0422 17:47:48.168804   47752 status.go:330] multinode-704531-m02 host status = "Running" (err=<nil>)
	I0422 17:47:48.168819   47752 host.go:66] Checking if "multinode-704531-m02" exists ...
	I0422 17:47:48.169084   47752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:47:48.169118   47752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:47:48.184276   47752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45067
	I0422 17:47:48.184697   47752 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:47:48.185131   47752 main.go:141] libmachine: Using API Version  1
	I0422 17:47:48.185155   47752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:47:48.185435   47752 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:47:48.185607   47752 main.go:141] libmachine: (multinode-704531-m02) Calling .GetIP
	I0422 17:47:48.188071   47752 main.go:141] libmachine: (multinode-704531-m02) DBG | domain multinode-704531-m02 has defined MAC address 52:54:00:41:a7:6d in network mk-multinode-704531
	I0422 17:47:48.188475   47752 main.go:141] libmachine: (multinode-704531-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a7:6d", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:46:23 +0000 UTC Type:0 Mac:52:54:00:41:a7:6d Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-704531-m02 Clientid:01:52:54:00:41:a7:6d}
	I0422 17:47:48.188501   47752 main.go:141] libmachine: (multinode-704531-m02) DBG | domain multinode-704531-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:41:a7:6d in network mk-multinode-704531
	I0422 17:47:48.188586   47752 host.go:66] Checking if "multinode-704531-m02" exists ...
	I0422 17:47:48.188931   47752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:47:48.188989   47752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:47:48.203623   47752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36103
	I0422 17:47:48.204060   47752 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:47:48.204489   47752 main.go:141] libmachine: Using API Version  1
	I0422 17:47:48.204508   47752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:47:48.204877   47752 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:47:48.205060   47752 main.go:141] libmachine: (multinode-704531-m02) Calling .DriverName
	I0422 17:47:48.205250   47752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:47:48.205266   47752 main.go:141] libmachine: (multinode-704531-m02) Calling .GetSSHHostname
	I0422 17:47:48.207851   47752 main.go:141] libmachine: (multinode-704531-m02) DBG | domain multinode-704531-m02 has defined MAC address 52:54:00:41:a7:6d in network mk-multinode-704531
	I0422 17:47:48.208310   47752 main.go:141] libmachine: (multinode-704531-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:a7:6d", ip: ""} in network mk-multinode-704531: {Iface:virbr1 ExpiryTime:2024-04-22 18:46:23 +0000 UTC Type:0 Mac:52:54:00:41:a7:6d Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:multinode-704531-m02 Clientid:01:52:54:00:41:a7:6d}
	I0422 17:47:48.208362   47752 main.go:141] libmachine: (multinode-704531-m02) DBG | domain multinode-704531-m02 has defined IP address 192.168.39.19 and MAC address 52:54:00:41:a7:6d in network mk-multinode-704531
	I0422 17:47:48.208476   47752 main.go:141] libmachine: (multinode-704531-m02) Calling .GetSSHPort
	I0422 17:47:48.208648   47752 main.go:141] libmachine: (multinode-704531-m02) Calling .GetSSHKeyPath
	I0422 17:47:48.208776   47752 main.go:141] libmachine: (multinode-704531-m02) Calling .GetSSHUsername
	I0422 17:47:48.208933   47752 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18706-11572/.minikube/machines/multinode-704531-m02/id_rsa Username:docker}
	I0422 17:47:48.291050   47752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:47:48.308156   47752 status.go:257] multinode-704531-m02 status: &{Name:multinode-704531-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:47:48.308188   47752 status.go:255] checking status of multinode-704531-m03 ...
	I0422 17:47:48.308538   47752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0422 17:47:48.308573   47752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0422 17:47:48.323879   47752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0422 17:47:48.324292   47752 main.go:141] libmachine: () Calling .GetVersion
	I0422 17:47:48.324722   47752 main.go:141] libmachine: Using API Version  1
	I0422 17:47:48.324741   47752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0422 17:47:48.325061   47752 main.go:141] libmachine: () Calling .GetMachineName
	I0422 17:47:48.325235   47752 main.go:141] libmachine: (multinode-704531-m03) Calling .GetState
	I0422 17:47:48.326623   47752 status.go:330] multinode-704531-m03 host status = "Stopped" (err=<nil>)
	I0422 17:47:48.326636   47752 status.go:343] host is not running, skipping remaining checks
	I0422 17:47:48.326642   47752 status.go:257] multinode-704531-m03 status: &{Name:multinode-704531-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
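The stderr trace above ends with per-node status structs (Name, Host, Kubelet, APIServer, Kubeconfig, Worker, ...), the same data that `status --output json` returned in the CopyFile step earlier. A hedged Go sketch for reading that JSON; the field names are copied from the structs printed above, but the exact JSON shape (single object vs. array for multi-node clusters) and key casing are assumptions here, which is why decoding falls back between the two.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus mirrors the fields of the status structs printed in the stderr
// trace above; treat the JSON key casing and overall shape as an assumption.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// Placeholder binary/profile; the CI run uses out/minikube-linux-amd64 -p multinode-704531.
	out, err := exec.Command("minikube", "-p", "multinode-704531", "status", "--output", "json").Output()
	if err != nil {
		// status exits non-zero (e.g. exit status 7) when a node is stopped, as seen
		// above, but may still print JSON on stdout.
		fmt.Println("status exited with error:", err)
	}
	var nodes []nodeStatus
	if jsonErr := json.Unmarshal(out, &nodes); jsonErr != nil {
		// A single-node cluster may print one object instead of an array.
		var one nodeStatus
		if json.Unmarshal(out, &one) == nil {
			nodes = []nodeStatus{one}
		}
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
	}
}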

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-704531 node start m03 -v=7 --alsologtostderr: (28.779038913s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.41s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-704531 node delete m03: (1.874947442s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (169.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-704531 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0422 17:56:19.002512   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-704531 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m48.92253965s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-704531 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (169.47s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (48.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-704531
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-704531-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-704531-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.848206ms)

                                                
                                                
-- stdout --
	* [multinode-704531-m02] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-704531-m02' is duplicated with machine name 'multinode-704531-m02' in profile 'multinode-704531'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-704531-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-704531-m03 --driver=kvm2  --container-runtime=crio: (46.738733951s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-704531
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-704531: exit status 80 (228.838418ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-704531 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-704531-m03 already exists in multinode-704531-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-704531-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.07s)

                                                
                                    
x
+
TestScheduledStopUnix (113.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-611149 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-611149 --memory=2048 --driver=kvm2  --container-runtime=crio: (41.932628937s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611149 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-611149 -n scheduled-stop-611149
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611149 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611149 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-611149 -n scheduled-stop-611149
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-611149
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611149 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-611149
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-611149: exit status 7 (74.748483ms)

                                                
                                                
-- stdout --
	scheduled-stop-611149
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-611149 -n scheduled-stop-611149
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-611149 -n scheduled-stop-611149: exit status 7 (78.021038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-611149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-611149
--- PASS: TestScheduledStopUnix (113.64s)
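The scheduled-stop flow above exercises three things: `--schedule <duration>` to arm a delayed stop, `--cancel-scheduled` to disarm it, and `status` to observe the eventual Stopped state (exit status 7). A minimal Go sketch driving that sequence under the same flags; the binary name and profile are placeholders from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run is a tiny helper around os/exec for the scheduled-stop flags shown above.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput() // placeholder binary name
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "scheduled-stop-611149" // placeholder profile from the log above
	_ = run("stop", "-p", p, "--schedule", "5m")   // arm a stop 5 minutes out
	_ = run("stop", "-p", p, "--cancel-scheduled") // disarm it again
	_ = run("stop", "-p", p, "--schedule", "15s")  // arm a short stop
	time.Sleep(20 * time.Second)                   // wait past the schedule
	_ = run("status", "-p", p)                     // expect Stopped / exit status 7
}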

                                                
                                    
x
+
TestRunningBinaryUpgrade (186.55s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.137145842 start -p running-upgrade-759056 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.137145842 start -p running-upgrade-759056 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m41.021503841s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-759056 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-759056 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.007393841s)
helpers_test.go:175: Cleaning up "running-upgrade-759056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-759056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-759056: (1.194298972s)
--- PASS: TestRunningBinaryUpgrade (186.55s)
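The upgrade-in-place pattern above is: create the profile with an older released minikube binary, then run the freshly built binary against the same profile while it is still running. A hedged Go sketch of that shape; both binary paths are placeholders (the CI run used a downloaded v1.26.0 binary in /tmp and out/minikube-linux-amd64), and flags are trimmed to those shown in the log.

package main

import (
	"fmt"
	"os/exec"
)

// startWith runs `<bin> start` against a fixed profile, mirroring the pattern
// above: first an older released binary, then the newly built one, same profile.
func startWith(bin, profile string) error {
	out, err := exec.Command(bin, "start", "-p", profile,
		"--memory=2200", "--driver=kvm2", "--container-runtime=crio").CombinedOutput()
	fmt.Printf("%s start: err=%v\n%s", bin, err, out)
	return err
}

func main() {
	profile := "running-upgrade-759056" // profile name from the log above
	_ = startWith("/tmp/minikube-old", profile)     // placeholder path to an older release
	_ = startWith("out/minikube-linux-amd64", profile) // freshly built binary
}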

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-457191 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-457191 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (122.570816ms)

                                                
                                                
-- stdout --
	* [false-457191] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 18:04:25.920011   55211 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:04:25.920145   55211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:04:25.920157   55211 out.go:304] Setting ErrFile to fd 2...
	I0422 18:04:25.920163   55211 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:04:25.920458   55211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-11572/.minikube/bin
	I0422 18:04:25.921258   55211 out.go:298] Setting JSON to false
	I0422 18:04:25.922531   55211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6411,"bootTime":1713802655,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0422 18:04:25.922623   55211 start.go:139] virtualization: kvm guest
	I0422 18:04:25.925073   55211 out.go:177] * [false-457191] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	I0422 18:04:25.927006   55211 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:04:25.926966   55211 notify.go:220] Checking for updates...
	I0422 18:04:25.928587   55211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:04:25.930105   55211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	I0422 18:04:25.931731   55211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	I0422 18:04:25.933058   55211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0422 18:04:25.934603   55211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:04:25.936144   55211 config.go:182] Loaded profile config "force-systemd-flag-461193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:04:25.936260   55211 config.go:182] Loaded profile config "kubernetes-upgrade-432126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0422 18:04:25.936343   55211 config.go:182] Loaded profile config "offline-crio-417483": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0
	I0422 18:04:25.936443   55211 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:04:25.973476   55211 out.go:177] * Using the kvm2 driver based on user configuration
	I0422 18:04:25.975230   55211 start.go:297] selected driver: kvm2
	I0422 18:04:25.975249   55211 start.go:901] validating driver "kvm2" against <nil>
	I0422 18:04:25.975260   55211 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:04:25.977596   55211 out.go:177] 
	W0422 18:04:25.979227   55211 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0422 18:04:25.980807   55211 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-457191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-457191" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-457191

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-457191"

                                                
                                                
----------------------- debugLogs end: false-457191 [took: 3.430688269s] --------------------------------
helpers_test.go:175: Cleaning up "false-457191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-457191
--- PASS: TestNetworkPlugins/group/false (3.89s)
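
The non-zero exit is the expected result here: with the crio runtime, minikube rejects --cni=false during flag validation (MK_USAGE, exit status 14), so no profile is ever created and every debug-log probe above reports a missing context. A minimal sketch of the rejected call and a working alternative, with false-demo as a placeholder profile name:

    # rejected before any VM is created: crio requires a CNI
    minikube start -p false-demo --container-runtime=crio --cni=false --driver=kvm2
    # accepted: name an explicit CNI (bridge is exercised later in this run), or omit --cni entirely
    minikube start -p false-demo --container-runtime=crio --cni=bridge --driver=kvm2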

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.34s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (182.67s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2400290290 start -p stopped-upgrade-310712 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0422 18:05:07.902713   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2400290290 start -p stopped-upgrade-310712 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m54.73830552s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2400290290 -p stopped-upgrade-310712 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2400290290 -p stopped-upgrade-310712 stop: (2.326558878s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-310712 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-310712 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.60236824s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (182.67s)
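
The stopped-binary variant differs from TestRunningBinaryUpgrade only in that the old binary stops the cluster before the new binary starts it again. A minimal sketch, again with placeholder paths ./minikube-old and ./minikube-new:

    ./minikube-old start -p stopped-upgrade --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    ./minikube-old -p stopped-upgrade stop      # halt the cluster with the old binary
    ./minikube-new start -p stopped-upgrade --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    ./minikube-new logs -p stopped-upgrade      # what the MinikubeLogs sub-test inspects afterwards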

                                                
                                    
x
+
TestPause/serial/Start (68.7s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-765072 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0422 18:06:19.002392   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-765072 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m8.696119802s)
--- PASS: TestPause/serial/Start (68.70s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-310712
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-310712: (1.090409828s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-799191 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-799191 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (75.292838ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-799191] minikube v1.33.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-11572/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-11572/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
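
The usage error above is minikube refusing the contradictory pair --no-kubernetes and --kubernetes-version. When the version comes from a persisted global config rather than the command line, the hint in the stderr applies; a short sketch with a placeholder profile name:

    # fails with MK_USAGE (exit status 14): the two flags contradict each other
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # if kubernetes-version is set globally, clear it and retry without the flag
    minikube config unset kubernetes-version
    minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio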

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (47.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-799191 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-799191 --driver=kvm2  --container-runtime=crio: (47.00539541s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-799191 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (53.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-799191 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-799191 --no-kubernetes --driver=kvm2  --container-runtime=crio: (52.541532251s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-799191 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-799191 status -o json: exit status 2 (238.246876ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-799191","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-799191
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-799191: (1.08385519s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (53.86s)
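
The JSON status shown above is what the helper decodes to confirm the host VM stays up while the kubelet and API server are stopped. For inspecting the same fields interactively, one option (assuming jq is available on the host; the suite itself parses the JSON in Go) is:

    minikube -p NoKubernetes-799191 status -o json | jq '{Host, Kubelet, APIServer}'
    # expected while Kubernetes is disabled:
    # { "Host": "Running", "Kubelet": "Stopped", "APIServer": "Stopped" }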

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-799191 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-799191 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.066658721s)
--- PASS: TestNoKubernetes/serial/Start (29.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-799191 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-799191 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.241725ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
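
The exit status 1 is the passing outcome: the test shells into the node and asks systemd whether the kubelet unit is active, and systemctl is-active exits non-zero when the unit is not running (status 3 here, matching the ssh status in the log), which minikube ssh surfaces as a failure. The same check by hand:

    # exits 0 only if kubelet is active; for a --no-kubernetes profile it is expected to fail
    minikube ssh -p NoKubernetes-799191 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet not running, as expected"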

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.595856512s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.96274983s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-799191
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-799191: (1.379691015s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (62.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-799191 --driver=kvm2  --container-runtime=crio
E0422 18:10:07.901731   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/addons-934361/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-799191 --driver=kvm2  --container-runtime=crio: (1m2.056394608s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (105.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m45.819728958s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-799191 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-799191 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.007512ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (126.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0422 18:11:19.003029   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (2m6.924245709s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (126.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (115.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m55.360494918s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (115.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-457191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-457191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-txxs9" [5cbd3ae5-56db-43df-9d3c-91482665e0f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-txxs9" [5cbd3ae5-56db-43df-9d3c-91482665e0f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004899762s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-457191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
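
Every CNI group in this run ends with the same three probes against its netcat deployment: a cluster-DNS lookup, a loopback connection to the pod's own port, and a hairpin connection back through the service name. Condensed for the auto profile (commands as run above):

    kubectl --context auto-457191 exec deployment/netcat -- nslookup kubernetes.default                    # DNS
    kubectl --context auto-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # Localhost
    kubectl --context auto-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # HairPin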

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (98.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m38.476397369s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (98.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mm8mh" [ef847566-01b4-45af-aa6f-f926023eb924] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004423902s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (96.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m36.783483585s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-457191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-457191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5lzrj" [49f6aebe-c9b6-42f6-bb1f-2949097f1ad1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5lzrj" [49f6aebe-c9b6-42f6-bb1f-2949097f1ad1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005348743s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-457191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-457191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gjmsv" [9c60dbad-b446-4bcc-8253-7e33ac338401] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gjmsv" [9c60dbad-b446-4bcc-8253-7e33ac338401] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004776211s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-457191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-457191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (73.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m13.148630027s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (134.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-457191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m14.047723421s)
--- PASS: TestNetworkPlugins/group/calico/Start (134.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-457191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-457191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pfddm" [ba5395cb-3339-4478-9d79-d717408c0234] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pfddm" [ba5395cb-3339-4478-9d79-d717408c0234] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004849881s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-457191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cmrvs" [2e81f6e9-c268-4afa-813d-0d455482f57b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005105468s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
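
The ControllerPod checks poll until a pod carrying the CNI's controller label is Running and healthy. Outside the test helpers, roughly the same wait can be expressed with kubectl directly; a sketch for the flannel profile, assuming readiness is the condition of interest:

    kubectl --context flannel-457191 -n kube-flannel wait --for=condition=ready pod \
      -l app=flannel --timeout=10m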

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-457191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-457191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pz9k4" [43b9c4f7-53fd-41b0-a197-6ebb320e936a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-pz9k4" [43b9c4f7-53fd-41b0-a197-6ebb320e936a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004638397s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-457191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-457191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-47xbw" [ab26c00b-5161-4d98-a9d8-a3426568d920] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-47xbw" [ab26c00b-5161-4d98-a9d8-a3426568d920] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006650283s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-457191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)
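
Note: the Localhost and HairPin steps both run nc from inside the netcat deployment. The first targets localhost:8080 (the pod reaching itself directly); the second targets the netcat service name, i.e. the pod reaching itself back through its own Service, which is hairpin traffic. A hedged sketch of the two probes, mirroring the kubectl exec commands in the log (context name and port are from the log; the wrapper is illustrative):

// hairpin_probe_sketch.go - illustrative reproduction of the Localhost/HairPin probes.
package main

import (
	"fmt"
	"os/exec"
)

// probe runs `nc -w 5 -i 5 -z <target> 8080` inside the netcat deployment.
func probe(kubectlContext, target string) error {
	cmd := exec.Command("kubectl", "--context", kubectlContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	return cmd.Run()
}

func main() {
	const ctx = "bridge-457191" // from the log above

	// Localhost: the pod connects to itself on 127.0.0.1.
	fmt.Println("localhost:", probe(ctx, "localhost"))
	// HairPin: the pod connects to itself through its own Service name,
	// which only succeeds when the CNI/kube-proxy setup allows hairpin traffic.
	fmt.Println("hairpin:", probe(ctx, "netcat"))
}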

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-457191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (91.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-407991 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-407991 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m31.633495692s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (91.63s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (96.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-782377 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-782377 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m36.507705316s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qhdjt" [1dd1b148-82d8-4b89-bda6-4229b093a101] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.177204193s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-457191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-457191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-q6p4v" [f2cc9637-bbe6-4a0f-9086-44e81315cad8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-q6p4v" [f2cc9637-bbe6-4a0f-9086-44e81315cad8] Running
E0422 18:16:19.002876   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003957017s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-457191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-457191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-856422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-856422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (59.938373697s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-407991 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1defa509-7376-49cf-9825-5a0f6cdc8243] Pending
helpers_test.go:344: "busybox" [1defa509-7376-49cf-9825-5a0f6cdc8243] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1defa509-7376-49cf-9825-5a0f6cdc8243] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0062504s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-407991 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)
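
Note: DeployApp ends by reading `ulimit -n` inside the busybox pod, presumably to confirm the runtime's file-descriptor defaults reached the container. A hedged sketch of that final check (pod and context names are from the log; the parsing is an illustrative assumption):

// ulimit_check_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-407991",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	limit, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("unexpected output:", string(out))
		return
	}
	fmt.Println("open file limit in the pod:", limit)
}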

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-782377 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d1cccda1-fa83-49e7-bb67-35558f318c35] Pending
helpers_test.go:344: "busybox" [d1cccda1-fa83-49e7-bb67-35558f318c35] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d1cccda1-fa83-49e7-bb67-35558f318c35] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003724349s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-782377 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-407991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-407991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.184973772s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-407991 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)
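
Note: EnableAddonWhileActive passes --images and --registries overrides and then describes the metrics-server deployment; the point of interest is whether the deployment spec picked up the overridden image. A hedged sketch that extracts just the container image from the deployment (context and deployment names are from the log; how minikube composes the registry and image into the final string is not asserted here, the sketch only prints it for inspection):

// addon_image_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "no-preload-407991",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err, string(out))
		return
	}
	// Expect the image string to reflect the --images/--registries overrides
	// passed to `minikube addons enable metrics-server` above.
	fmt.Println("metrics-server image:", string(out))
}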

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-782377 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-782377 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f505b39b-6b5d-459a-9177-4cb47fdc40b5] Pending
helpers_test.go:344: "busybox" [f505b39b-6b5d-459a-9177-4cb47fdc40b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0422 18:17:45.496354   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:45.501664   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:45.511968   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:45.532332   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:45.572663   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:45.653307   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f505b39b-6b5d-459a-9177-4cb47fdc40b5] Running
E0422 18:17:45.814103   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:46.134621   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:46.775265   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:48.055989   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:17:50.616386   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003521976s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-856422 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-856422 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (694.99s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-407991 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0422 18:19:42.260555   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:19:42.285707   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-407991 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (11m34.718792187s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-407991 -n no-preload-407991
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (694.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (614.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-782377 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0422 18:19:50.047507   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:50.052764   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:50.063068   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:50.083410   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:50.123772   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:50.204188   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:50.364891   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:50.685243   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:51.326400   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:19:52.607413   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-782377 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (10m13.840239329s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-782377 -n embed-certs-782377
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (614.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (598.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-856422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
E0422 18:20:29.338216   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/auto-457191/client.crt: no such file or directory
E0422 18:20:31.010602   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:20:38.180631   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:20:53.040187   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/kindnet-457191/client.crt: no such file or directory
E0422 18:20:59.089572   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/enable-default-cni-457191/client.crt: no such file or directory
E0422 18:21:03.541226   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:03.546478   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:03.556754   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:03.577073   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:03.617378   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:03.697781   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:03.858231   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:04.178880   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:04.181136   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/custom-flannel-457191/client.crt: no such file or directory
E0422 18:21:04.820032   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:06.100820   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:08.661570   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:11.971300   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/flannel-457191/client.crt: no such file or directory
E0422 18:21:13.781730   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
E0422 18:21:19.002381   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
E0422 18:21:19.141768   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/bridge-457191/client.crt: no such file or directory
E0422 18:21:24.022830   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-856422 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (9m57.901006196s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-856422 -n default-k8s-diff-port-856422
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (598.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-367072 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-367072 --alsologtostderr -v=3: (2.303049293s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-367072 -n old-k8s-version-367072: exit status 7 (74.708801ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-367072 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0422 18:21:44.503936   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/calico-457191/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
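
Note: EnableAddonAfterStop relies on `minikube status` signalling a stopped host through its exit code rather than an error message: the command prints "Stopped" on stdout and exits 7, which the harness records as "status error: exit status 7 (may be ok)". A hedged sketch of reading that exit code from Go (profile name and flags are from the log; the interpretation of a non-zero code as "may be ok" follows the log's own wording):

// status_exitcode_sketch.go - illustrative only.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-367072",
		"-n", "old-k8s-version-367072")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host status:", string(out))
	case errors.As(err, &exitErr):
		// A non-zero exit code still comes with the status text on stdout.
		fmt.Printf("host status: %s (exit code %d, may be ok)\n",
			string(out), exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}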

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-505212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-505212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (1m0.26191182s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-505212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-505212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.199805196s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-505212 --alsologtostderr -v=3
E0422 18:46:19.002716   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/functional-005894/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-505212 --alsologtostderr -v=3: (10.655443161s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.66s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-505212 -n newest-cni-505212
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-505212 -n newest-cni-505212: exit status 7 (74.694913ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-505212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (38.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-505212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-505212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0: (38.098919347s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-505212 -n newest-cni-505212
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.36s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-505212 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-505212 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-505212 -n newest-cni-505212
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-505212 -n newest-cni-505212: exit status 2 (244.858202ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-505212 -n newest-cni-505212
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-505212 -n newest-cni-505212: exit status 2 (243.959305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-505212 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-505212 -n newest-cni-505212
E0422 18:47:02.857067   18884 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-11572/.minikube/profiles/no-preload-407991/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-505212 -n newest-cni-505212
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.42s)
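
Note: the Pause step checks two independent status fields. After `minikube pause`, the log shows the API server reporting Paused and the kubelet reporting Stopped, with both status queries exiting 2 because the queried component is not running; after `minikube unpause` both queries are repeated. A hedged sketch of querying the two fields the same way (profile name is from the log; tolerating the non-zero exit code mirrors the harness's "may be ok" handling):

// pause_status_sketch.go - illustrative only.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// field queries one status field (e.g. {{.APIServer}} or {{.Kubelet}}) for a profile,
// tolerating the non-zero exit codes minikube uses for paused/stopped components.
func field(profile, format string) string {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		return "error: " + err.Error()
	}
	return string(out)
}

func main() {
	const profile = "newest-cni-505212" // from the log above
	fmt.Println("APIServer:", field(profile, "{{.APIServer}}"))
	fmt.Println("Kubelet:  ", field(profile, "{{.Kubelet}}"))
}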

                                                
                                    

Test skip (36/311)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.0/cached-images 0
15 TestDownloadOnly/v1.30.0/binaries 0
16 TestDownloadOnly/v1.30.0/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
251 TestNetworkPlugins/group/kubenet 3.25
259 TestNetworkPlugins/group/cilium 3.7
267 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-457191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-457191" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-457191

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-457191"

                                                
                                                
----------------------- debugLogs end: kubenet-457191 [took: 3.1075869s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-457191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-457191
--- SKIP: TestNetworkPlugins/group/kubenet (3.25s)
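Note: the kubenet group is skipped here because, with cri-o, the suite expects an explicit CNI rather than kubenet. As a point of reference, a minimal sketch of starting an equivalent cri-o cluster with a CNI (flags as documented for minikube start; the profile name simply reuses the one above and is illustrative):

    # start a cri-o cluster with the bridge CNI instead of kubenet (illustrative only)
    out/minikube-linux-amd64 start -p kubenet-457191 --driver=kvm2 --container-runtime=cri-o --cni=bridge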

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-457191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-457191" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-457191

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-457191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-457191"

                                                
                                                
----------------------- debugLogs end: cilium-457191 [took: 3.538553146s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-457191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-457191
--- SKIP: TestNetworkPlugins/group/cilium (3.70s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-944223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-944223
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
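Note: this group is restricted to the virtualbox driver, which is the driver whose host-folder mounts the test disables. A minimal sketch of the corresponding start invocation, assuming minikube's --disable-driver-mounts flag for VM drivers (illustrative only, not executed in this run):

    # start the profile on virtualbox with the hypervisor driver mounts disabled (illustrative only)
    out/minikube-linux-amd64 start -p disable-driver-mounts-944223 --driver=virtualbox --disable-driver-mounts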

                                                
                                    